Monday, July 27, 2015

Abortion, Animal Rights, and Moral Consistency

Ok, I can't take it anymore. I'm supposed to be working on a paper but the recent flood of articles regarding Planned Parenthood and abortion is making me loco--but not for the reasons you might think. I'm not going to argue for a position on abortion; I just want to point out a few things about how a position on the abortion issue "bears" on animal rights if there is going to be a modicum of concern for moral consistency.

Before continuing I just want to emphasize that this article isn't intended to engage with all of the philosophical literature on abortion--that would require a book or more. The intent is to look at some of the most common reasons given for opposing abortion and how they relate to animal rights, if moral consistency has any value. At the end I'll briefly
"flip it and reverse it" and suggest how a position on animal rights bears on abortion.

The Basic Argument
One of the most common arguments against abortion is that "it is murder." The argument goes something like this:

P1. Abortion is killing an innocent person.
P2. Killing an innocent person is murder.
P3. Murder is wrong.
C.  Therefore abortion is wrong (via transitivity).

Of course the argument only works if you accept P1, which is in fact where the real debate lies. Is a fetus a person? And if so, what attributes confer personhood?

Often what you'll hear is that the fetus is a person because they are human and since everyone agrees it's wrong to kill innocent humans, abortion is also wrong. But being human is simply a biological category. What we want to know is what attributes the fetus has that makes its interests worthy of moral consideration. Saying "because it's human" only redescribes what we already know. Nobody is doubting that the fetus is trivially biologically human. We still need an answer to the question "what morally relevant attribute do human fetuses have that makes it so it's wrong to kill them?"

Taking a step back, this is one of my favorite things about philosophy. We ask questions for which the answer seems so obvious that no reasonable person would even think to ask the question in the first place, yet once we ask the question the answer doesn't seem so obvious after all. In this case, the general question is "why is it wrong to kill humans?". As should be clear now, answering "because they are human" is not very satisfying.  Duh! We know that! Surely, there must be something about humans that makes it so it's wrong to kill them. What is it?

One popular answer is "rationality". Ok, suppose we accept that. Is a fetus rational? Nope. Are some adult mammals rational? Yes (maybe they can't do upper-division math but they have a minimal rationality that we recognize in human children). So, if rationality is truly the standard for moral consideration of interests, it seems like we should have less of a problem with abortion than we do with killing pigs and experimenting on primates. Pigs and other adult mammals are orders of magnitude more rational than a fetus--which isn't rational at all--and at least as rational as young children.

The obvious reply is that a fetus is potentially rational. It will one day be rational and since rationality is what makes it so we shouldn't kill humans, we shouldn't kill the fetus. One problem with this reply is that it's not clear how potential properties confer current rights. If I will potentially be a landlord does that mean I should get all the rights a landlord has now? Does the fetus that will one day be a university student get all the rights of a university student now? We typically don't give children full rights of adulthood until they have the capacities to exercise those rights. How do you get rights for capacities you don't currently have? That seems a bit odd.

But let's grant that you can somehow get rights based on your potential attributes, in this case rationality. Is rationality really the measure of moral consideration? Consider: a child poet and an adult logician are both about to die and you can only save one. Is it so clear that you should save the logician? Although rationality seems to play some role in whether we confer moral consideration, it doesn't seem to be the most important consideration. If it were, we should give more moral consideration to adult mammals than fetuses since adult mammals are more rational.

And, even if we grant that a fetus can have rights in virtue of a potential attribute, surely we should also take into account rights that derive from that same actual attribute.  In other words, if we want to say that a fetus has certain moral rights in virtue of its potential rationality, consistency demands that we also say that, in so far as living animals are actually rational, they have rights commensurate with their actual rationality.  It would be a strange moral theory that confers greater moral status commensurate with potential attributes than actual attributes.

You can run this same argument for potential and actual desires (to live). Although an animal might not be able to express it verbally, it's reasonable to infer from its behavior that it would rather live than die. Does a fetus have desires? Nope. Ok, so we can go the potential or future desires route but accepting this would seem to require us to also accept the actual desires of animals not to be killed.

OK, so a fetus isn't rational and maybe rationality isn't all there is to having moral status. Maybe the capacity to feel pain is what confers moral consideration? At least in the early stages of development, a fetus is incapable of feeling pain since it has no central nervous system. Animals, on the other hand, do feel pain, so, if pain is the marker of moral consideration, we should give moral consideration to living animals rather than to fetuses. We could again appeal to potential pain, if that even makes sense. Even if we allow it, it seems as though the actual pain of animals should be weighed at least as heavily as the potential pain of a fetus that never lives to feel that pain.

"But a fetus has a beating heart!!!" Perhaps after 6 weeks this is true. But again, suppose we accept that having a beating heart is what confers moral status and makes termination impermissible. Animals have beating hearts too, so they too must have moral status, and terminating their lives is also impermissible.


Another possible answer is that it's wrong to kill a fetus because it has human DNA. First of all, this criterion is question-begging. We already know that the human fetus has human DNA. What we want to know is why merely having human DNA confers moral status. My fingernail clippings have human DNA. Do they have moral status? Maybe it's replicating human DNA that has moral status. But why? The various organs in my body all have replicating DNA; do those cells have moral status? That seems weird.

Life Begins at Conception
If we charitably employ the term "life" this is trivially true in a biological sense but of course mere descriptive biological facts don't necessarily imply moral conclusions. Typically, for something to be considered alive in a full sense we'd think some degree of self-sufficiency would come into play. Anyhow, is "being alive" all that's required for moral consideration of interests? If that's the case, all animals should also have their relevant interests considered in proportion to how alive they are.

"No! No! It's different because it's human life." Ok, fine. Tell me again what morally relevant attribute a human fetus has that other creatures don't have. And saying "because it's human" again and again doesn't answer the question. It merely redescribes the biological facts but says nothing of the moral facts. We need an answer to the question, "what morally relevant attribute do human fetuses have that living adult animals don't have?"

Life Begins at Conception and In Vitro Fertilization
Although not directly linked to the issue of animal rights, one of the most glaring inconsistencies in the anti-abortion movement is its silence on in vitro fertilization. In most in vitro fertilization procedures, around 8 eggs are fertilized. Do you think that every couple that engages in in vitro fertilization uses all 8? Nope. Maybe they'll use two...(unless you're Octo-mom).

Now, if those that argue that moral life begins at conception take their position seriously they should be protesting in vitro clinics rather than abortion clinics. For each person they prevent from going through with the fertilization procedure they save 6 or 7 human "lives" rather than a measly single life at an abortion clinic.

The reason they would never do this is that, politically, their cause would fail, and it wouldn't surprise me if at least some people who oppose abortion have used in vitro themselves. Nothing like your own needs and desires to motivate a special pleading argument or initiate motivated reasoning.

To be fair, some in vitro clinics have gotten around this "inconvenient truth" by freezing whatever embryos aren't used. "Hey, we never terminated them, we just froze them forever"--or (more realistically) at least until we forget about it. Anyone with any intellectual honesty should see what a cop-out this is.

The Bottom Line
When we talk about rights we usually think of rights in terms of particular capacities. We don't give children the right to vote or to drive because we don't think they have the relevant capacities. When they develop those capacities they gain the relevant rights. Similarly, we don't give men the same reproductive rights as women because biologically they couldn't exercise those rights. If this is our model of rights (i.e., capacities) then it seems odd to confer rights on something without any relevant capacities. We can of course say that it has the capacity to live, but this doesn't distinguish it from any other living thing, and so consistency requires that we either reject the argument or confer those same rights on those other living things.

And so, if anti-abortionists were sincere in their arguments, consistency demands that they be just as sincere in their advocacy and protection of animal rights. In short, there should be no meat-eating anti-abortionists.

[Philosophers note: the capacities theory of rights isn't the only theory of rights.]

Flip it and Reverse It
Notice that the consistency requirement works the other way too. If you have strong views against killing animals yet are pro-choice, your views may be inconsistent depending on how you defend your position on animal rights and the stage in a fetus's development up to which you think abortion is permissible. If you think abortion is permissible at a stage in its development where it has some of the attributes that are shared by living animals, then your position is likely to be inconsistent. Or if you think it's just wrong to terminate an animal's life prematurely for any reason, then your case for the permissibility of abortion is paper-thin if moral consistency matters--especially if you are OK with late-term abortions.

One other puzzle that pro-choice advocates have to deal with is coming up with a morally relevant criterion that distinguishes a late-stage fetus in the womb from a new-born infant. Well, let me qualify that. The distinction has to be made so long as the pro-choice advocate thinks it's wrong to terminate a healthy new-born but permissible to terminate a 3rd-trimester fetus. What is the morally relevant attribute that the new-born has that the fetus doesn't have?

And in the interest of fairness, animal rights proponents often point to the gruesomeness of killing animals at the factory level. If gruesomeness is a morally relevant property (which, in my view, is very plausible) late-term abortions are also gruesome and so this gruesomeness should inform our position. To get an idea of what late-term abortions are like, I suggest watching the documentary Lake of Fire, which is probably one of the best documentaries on the abortion debate.

Wednesday, May 20, 2015

Solving US Health Care Cost Problems: Free Market vs Government Policy. Part 1

I apologize in advance for the somewhat scattered nature of this post. I'm trying to work through some ideas. 

What I'm trying to figure out is whether certain cost problems in the US health care system can be solved using free market approaches or whether solutions require government intervention. The problems I'm looking at arise as a consequence of conflicting interests between insurance companies, hospitals, doctors, and patients. Any solution has to find a way to harmonize the respective interests of each. Recent evidence suggests government action works, although the case is far from certain. Also, supposing there are equally effective free-market solutions, we still must ask why we'd choose one type of solution over the other. Let's get some statistics on the table first to get a general overview of the US health care situation.

Give or take, total US expenditures on health care in 2013 were $2.9 trillion. When we average that cost over the population, the per-person cost is $9,225 (2013). To give those figures some context, the OECD average for per-person health care spending is $3,448. The next highest spender is Switzerland at $6,080 per person. Despite the familiar mantra of "but we have the best healthcare in the world", the US performs comparatively poorly in terms of many health outcomes. (To be fair, there are also a few areas where it doesn't, such as wait times for specialists and surgeries, and for cancer treatment outcomes.) Currently, health care costs represent about 17% of US GDP and are expected to rise to 22%, whereas the OECD average is 9.5% of GDP, the next highest being the Netherlands at 12%.

If you're like me, you're thinking, "wait a minute, I didn't consume even close to $9,000 in health care this year. Where is this number coming from?" In other words, we can't just look at averages; we need to know how those costs are distributed across the population. That information will allow us to target cost-saving policy at the costliest populations and/or health issues. Are you ready for your head to 'asplode'?

In a single year, what percentage of total health care dollars spent do you think went to the top 5% of health care users? (I.e., the sickest people). Ready? About 49%. That's right. Just 5% of the population consumed almost half of all health care dollars spent in a year.  Now, what percent of the total health care dollars spent do you think went to the top 1% of health care consumers? Ready? The top 1% of health care consumers consumed about 30% of the total health care dollars spent in a single year.

Ok, let's look at the other end of the spectrum. What percentage of total health care spending did the bottom 50% consume? (I.e., the healthiest people or those with the cheapest conditions to treat). Ready? It's 3%. Yup, 50% of the population only consumes about 3% of the total health care dollars spent in a year.
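To see just how skewed this distribution is, here's a quick back-of-the-envelope calculation. The total spend and the group shares come from the figures above; the 2013 US population of roughly 316 million is my own assumption, so treat the outputs as rough estimates.

```python
# Back-of-the-envelope: implied per-person spending by group,
# using the spending shares cited above and an assumed 2013 US population.
TOTAL_SPEND = 2.9e12   # total US health care spending, 2013 (from above)
POPULATION = 316e6     # assumed 2013 US population (~316 million)

# group name -> (share of population, share of total spending)
groups = {
    "top 1%":     (0.01, 0.30),
    "top 5%":     (0.05, 0.49),
    "bottom 50%": (0.50, 0.03),
}

for name, (pop_share, spend_share) in groups.items():
    per_person = (spend_share * TOTAL_SPEND) / (pop_share * POPULATION)
    print(f"{name}: ~${per_person:,.0f} per person")
```

On these assumptions, the top 1% consume roughly $275,000 each per year while the bottom half consume only about $550 each--which is exactly why policy aimed at the sickest few percent has so much more leverage than policy aimed at everyone.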

(For more fun facts about the distribution of health care spending, here's a--wicka wicka--breakdown)

So, why--beyond shock value--should we care about these statistics? Because if we're going to set policy to decrease health care costs we're going to get way more bang for our buck if policy is directed at the top 5% of users rather than everyone all at once. There's very little to be gained by reducing the health care costs of the healthiest 50%, whereas there are very likely cost savings available from the top 5%.

We might ask why treating this population is so expensive. To figure this out we need to know whether their treatment is expensive because of the nature of the conditions for which they need treatment, because of the way the conditions are treated/billed/managed, because they're constantly in and out of hospitals, or because of some combination of all of the above.

It turns out that the 5 most expensive conditions to treat are heart disease, cancer, trauma, mental disorders, and pulmonary conditions. But if only a small percentage of the population has these conditions, they won't account for the high overall costs. What we need to know is both which conditions are the most prevalent in the population and, of those, which are the most expensive to treat.

A quarter of the population has at least one of these five chronic conditions: diabetes, heart disease, asthma, mood disorders, or hypertension. Unfortunately, each of these conditions is associated with other conditions and illnesses. Treating the primary conditions in conjunction with the associated illnesses accounts for 50% of all health care spending.

How do we put all this information together? If we want to figure out a way to reduce costs, clearly we want to go after the most costly people and conditions to treat. And if those two variables overlap, that's probably a good target. So, how should we do it? Does government need to implement some sort of policy or are there free market solutions? To answer this question I want to use one hospital's method of reducing treatment costs as a case study. There are other successful models for cost reduction as well, which I'll also look at briefly. The point I want to establish is that it is possible. Not only is it possible, but these successful models reduced costs and increased quality of care and health outcomes.

What I really want to know is whether these models were a consequence of government policy (i.e., the ACA) or whether they could have come about without a government mandate. If the reply is the latter, we must ask the obvious question: then why didn't it happen pre-ACA? The pro-market person can correctly point out that it did, in a very small handful of cases. Just look at the Mayo Clinic and an HMO in Colorado. But there's a further question lurking. If these models were so successful, why weren't they copied? Presumably, when someone finds a superior business model, other businesses must copy it or lose out.

The pro-market person might reply that the regulatory environment pre-ACA interfered with market forces such that efficiencies weren't realizable. But their own example--the Mayo Clinic--seems to undermine this argument. I'll have to investigate this claim. On the other hand, the ACA (government action) brought in Accountable Care Organizations (ACOs). ACOs are voluntary programs that reward Medicare and Medicaid providers (e.g., hospitals, clinics, doctors, etc.) for cost savings through innovation. Health care economists and pretty much anyone else who works in health care policy have known for decades that preventative medicine and coordinated/managed care (where groups of specialists are paid as a team to manage patient outcomes) are the best ways to bring costs down and improve patient outcomes. So, why wasn't anyone doing these things pre-ACA? Why didn't market forces converge on this more efficient model?

My answer is that there were two prisoner's-dilemma-like situations. The first is between health insurance companies, the second between care providers (doctors and hospitals) and insurers. Let's take a look at the first. The health insurance business is an odd one. You're trying to sell a product that you hope your customer will never use. And so, the best customers, from the insurance companies' point of view, are the healthiest ones. Everyone wants the healthy customers. No one wants the sick ones. Here's the deal with preventative care and managed care programs: setting up these programs requires up-front investment that won't see returns for up to 5 years.

Here's the problem. If you're the only insurance company that invests in a preventative and managed health system you're going to have lots of healthy customers. This might seem good until you realize that you're the only one that invested in the program. What are the other companies going to do? They're going to try to poach your customers! You invested all that money and created healthy customers and now all the other insurance companies are going to swoop in and steal all the healthy customers you created. Seeing how this might happen, no insurance company wants to be the sucker--even though they want to have healthy customers! And so no insurance company makes the up front investment and we end up with no cost saving measures.

It looks like the only way you can get the health insurance companies to make the up front investment in these cost saving measures is if somehow "someone" gives each company assurances that all the others will do the same. This way, no one will end up a sucker by being the only one to make up front investments only to have the new healthy customers poached. It looks like the government is that "someone" who can offer the assurance by mandating preventative care be part of health care insurance policies. Doing so allows the insurance companies to exit the prisoner's dilemma.
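The structure of the dilemma can be made explicit with a toy payoff matrix. The numbers below are purely illustrative (my own assumption, not data), but they have the prisoner's-dilemma shape described above: mutual investment beats mutual holdout, yet each insurer does better by not investing no matter what the other does.

```python
# Toy prisoner's dilemma between two insurers, A and B.
# Strategies: "invest" in preventative/managed care, or "hold out" and poach.
# Payoffs are hypothetical (payoff_A, payoff_B) in arbitrary units.
PAYOFFS = {
    ("invest", "invest"):     (5, 5),   # both invest: shared pool of healthy customers
    ("invest", "hold out"):   (-2, 8),  # A pays up front, B poaches A's healthy customers
    ("hold out", "invest"):   (8, -2),  # symmetric case
    ("hold out", "hold out"): (0, 0),   # status quo: no one invests
}

def best_response(opponent_strategy):
    """A's best reply given what the opponent plays."""
    return max(["invest", "hold out"],
               key=lambda s: PAYOFFS[(s, opponent_strategy)][0])

# Holding out is a dominant strategy: it's A's best reply either way...
print(best_response("invest"))    # -> "hold out"
print(best_response("hold out"))  # -> "hold out"
# ...so both hold out and get (0, 0), even though (invest, invest) pays (5, 5).
```

A government mandate effectively removes the "hold out" option from the table, leaving only the outcome both insurers would have preferred all along.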

The second prisoner's dilemma occurs between doctors/care providers and insurance companies (I'm less sure this is technically a prisoner's dilemma; it might just be a more general case of conflicting interests). Doctors and care providers (e.g., hospitals) want to get paid more money rather than less. Insurance companies want to pay less rather than more. This leads to high costs. For example, if a doctor is charging for every test he orders and every minor consultation, he's likely to order more tests than are necessary. In fact, there's good evidence that this happens. Consider this data from 2012: for some types of tests, US doctors order significantly more than their OECD counterparts. So, how do we get doctors and hospitals to move to a managed care model? I.e., one where they aren't necessarily under a fee-for-service model, or at least where they're under a model that doesn't incentivize ordering unnecessary tests and procedures?

Again, it looks like at least one answer is through government mandate. ACO programs under the ACA are voluntary for hospitals to enter. Hospitals get to share whatever cost savings they generate through managed care initiatives. Interestingly, the hospitals are free to experiment with whatever managed care models they want. So, if a hospital saves Medicare and Medicaid 2 million dollars compared to the previous year, the hospital gets to keep just over half of the savings. The program has been a huge success. Here are a couple of highlights:
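As a concrete sketch of the incentive, here's the split on that 2-million-dollar example. The 52% share rate is my own hypothetical stand-in for "just over half"; actual ACO share rates vary by program track.

```python
# Hypothetical shared-savings split under an ACO-style arrangement.
SAVINGS = 2_000_000   # hospital saves Medicare/Medicaid $2M vs. the prior year
SHARE_RATE = 0.52     # assumed "just over half" goes back to the hospital

hospital_share = SAVINGS * SHARE_RATE
medicare_share = SAVINGS - hospital_share
print(f"hospital keeps ${hospital_share:,.0f}, Medicare keeps ${medicare_share:,.0f}")
# hospital keeps $1,040,000, Medicare keeps $960,000
```

The design point is that both parties come out ahead relative to the status quo, which is what makes a voluntary program viable.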

In the first year of the program, 58 Shared Savings Program ACOs held spending $705 million below their targets and earned performance payments of more than $315 million as their share of program savings. Total net savings to Medicare were about $383 million in shared savings, including repayment of losses for one Track 2 ACO.

In the second year Pioneer ACOs generated estimated total model savings of over $96 million and at the same time qualified for shared savings payments of $68 million. They saved the Medicare Trust Funds approximately $41 million. 

Where We At?
So far it looks like there's a strong case to be made that government action can solve many of the cost problems with the US health care system. Of course, just because the government can solve these problems it doesn't necessarily follow that a market-based solution couldn't solve them too. Some might even argue that the problems arose in the first place as a consequence of government intervention in the market. I'll look at these arguments in the next post. For now, amaze your friends with your new-found knowledge of healthcare statistics.

Friday, May 15, 2015

Day 3: Psychological Jade and The Arrogance of Ignorance

Before reading this, I suggest reading my post from yesterday since most of what I talk about here relates to it (and I end up retracting most of what I said).

'The arrogance of ignorance' is one of my favorite phrases. I'm not sure of its origins but I heard it first from Dr. Steven Novella. I think the phrase is the best way to capture a cluster of common cognitive errors. In no particular order: a) assuming that because you are knowledgeable in one domain you know a lot about another (or are able to correctly evaluate another); b) moving from small data sets/anecdotal experiences to broad conclusions. 'The arrogance of ignorance' is a close cousin of the Dunning-Kruger effect: you have so little knowledge of a particular domain that you are unable to assess how little you actually know and grossly overestimate how much you do know, in turn leading you to wildly wrong conclusions.

Anyhow, yesterday I was guilty of all of the above crimes. You'd think a guest pass to a hospital and a few hours of observation would give me enough authority and knowledge to correctly evaluate an entire subfield of medicine. Strangely, it didn't.

Today I went back to inpatient psych, and boy am I glad I did (from a pedagogical point of view). Let me try to both convey my experience and undo some of the misconceptions I had.

Sample bias
Most of the patients I met yesterday had been in the hospital for over a week. I was meeting them after they'd undergone treatment and had been stabilized. Of course they seemed normal to me! As I learned (and hopefully you will too once you read this post), what I did was the equivalent of walking into a surgery unit, looking only at the patients about to be discharged, and asking, "why did they need surgery? They look fine to me!"

So, what are patients like on admission and early in treatment?

Obviously there are a variety of disorders but all of them are severe. Here's the thing: unless you work in a hospital or have someone in your family with a severe mental illness, you've probably never actually seen severe mental illness. To most of us, this is an invisible population because most of their lives are lived in care homes, in institutions and, unfortunately, on the streets.

The patients range from very well-spoken with linear thought to having only elementary vocabulary with disjointed, unintelligible thought, and any combination of the above. Regardless of where they fall on the spectrum, most of them suffer from severe delusions. Some examples: (1) being part of an intergalactic group of assassins pursued by the (intergalactic) mafia, (2) believing that a family member is dead who isn't, even though all the evidence they have should lead them to conclude the opposite, (3) being pursued by terrorists and (actually) destroying windows and cars to avoid/prevent the terrorist plot, (4) having voices in their head telling them to kill others or kill themselves. There were more but this should suffice.

Interestingly, the ones that had grand conspiracy delusions, e.g., (1) and (2), were extremely pleasant to talk to. If you were to have a conversation with them and the content of the delusion never came up, you wouldn't suspect a thing. It was as though they simultaneously inhabit two realities. When you ask them, they know where they are and why they're there. They'll say "I just want to get better" but at the same time they'll discuss their delusions as though they're just as real as the chair they're sitting on.

Unlike what I hypothesized yesterday, these people don't "just need a little more social and material support." That's the equivalent of saying someone with cancer can be healed with a back rub.

I think my reaction yesterday is probably analogous to what happens with many deniers of modern medicine. They've been in the hospital to visit a friend or they read some article online--maybe even spoken to a disgruntled doctor.  But they've only seen 1 billionth of the data set, and only from the patient side of the bed. Things look very different from the doctor's side of the bed and as you get a larger data set...

Tip of the Iceberg
Another factor that led me to my (wayward) conclusions yesterday was that I didn't ask enough questions about case histories. Once you read the case histories, your perspective changes very quickly. Every patient in there has a lifetime history of psychosis that is well documented. Almost all have been suffering the same symptoms since adolescence. Some have their condition for unknown reasons or for known biological reasons (usually genetic, as it runs in their families), others (there were 2 there) suffered major brain injuries at some point earlier in their lives and haven't been the same since, and others have their condition as a consequence of a lifetime of substance abuse. For many it was a combination.

Someone who thinks that a little positive thinking or mere talk-therapy is going to solve these people's problems is extremely naive--like I was yesterday. Someone who thinks along these lines is mistaking people who have one-off breakdowns or depression for this other population. Like I said, unless you work in a hospital or have a family member with severe mental illness (or work in law enforcement, probably), it's unlikely you've ever met anyone from this unfortunate population. Until you do, you can't fathom just how serious it is.

Philosophy of Science Lesson: Depression and Jade (Bear with me, You'll See How this Relates in a Moment)
What is jade? Up until the 19th century it was believed to be a kind of mineral. However, a French mineralogist (Alexis Damour) discovered that it was in fact two distinct minerals: Jadeite and nephrite, each with distinct chemical and structural properties. Nephrite is a microcrystalline interlocking fibrous matrix of the calcium, magnesium-iron rich amphibole mineral series tremolite (calcium-magnesium)-ferroactinolite (calcium-magnesium-iron). Jadeite is a sodium- and aluminium-rich pyroxene. The gem form of the mineral is a microcrystalline interlocking crystal matrix.

This isn't Rocks for Jocks 101, so why should we care? I'm getting there. Notice that both structurally and chemically, nephrite and jadeite are different. What does this mean? Well, we know that different chemical compositions will react differently, and different microstructures will also behave differently. Jadeite and nephrite have different fundamental properties. From the point of view of science--if we think that science divides and studies the world in terms of its fundamental rather than superficial properties--there's no such thing as jade. There's no one microstructure or chemical composition that is jade.

Here's another way to illustrate what I'm getting at. Why isn't there a science of green things? Why aren't there green-ologists? The reason is that green is a superficial property. Knowing that something is green gives us no predictive power in terms of how other green things will behave. It also provides no explanatory power for why it behaves the way it does. 

For example, green algae has very different fundamental chemical properties from green glass. Suppose I put HCl on the green algae. Based on the chemical reaction, would I be able to predict what will happen if I put HCl on green glass? Does the algae's greenness explain why it reacts the way it does to HCl? Of course not. In science, I want to lump things into categories that are going to allow me to make generalizations and predictions about other things in that category.

Learning about green algae doesn't help me learn anything important about green glass except what I already knew--they're both green. We don't lump the two into a scientific category because science is only concerned with "lumping" things in terms of shared fundamental properties rather than superficial properties. We ought to "split" superficial categories that contain objects that have different fundamental properties.

Ok, so what does all this have to do with psychiatry and psychiatric diagnoses? Consider that a common diagnosis we hear about is depression. Most cases we (by we, I mean non-medical professionals) have encountered are probably infrequent non-pathological affairs, perhaps set off by a traumatic event. We know that, with support, most people eventually work through the depression and end up fine. The problem is, 'depression' is psychological jade.

Different types of depression can manifest the same superficial symptoms but the underlying causal structures are different. (No, alt-meders, this isn't the same "root cause" you're thinking of but it's the one you ignore.)  So, the mistake is to think, "ah, depression...we just need to treat it with x, that's what we did with the last case". But this is to treat depression like jade--i.e., as a homogeneous category based only on superficial resemblance.

For example, I learned (the very surprising fact) that for many types of deep depression the most effective treatment is ECT (electroconvulsive therapy)--yes, you read that right! I had to ask the doctor twice because I couldn't believe my ears. Apparently, it's well studied. Of course, the current procedure is quite different from how it was in the early days but still...who'da thunk? 

"Jadists" about depression might think all cases of depression can be treated with ECT. This would be a mistake. There are different kinds of depression with different etiologies (underlying causal structures). It turns out that depression in manic depressives doesn't respond to ECT. Depression has its own jadeite and nephrite (and more). The "root cause" of depression in manic depression is fundamentally different than it is in other kinds of depression.

Besides giving an overview of a famous philosophical argument, why am I talking about this? Because if you're a human being you're probably going to commit the same cognitive error that I made when it comes to psychological diagnoses. You hear that a patient (or someone you know) is depressed or has some other general psychological problem and you think about how you or someone you know dealt with it. You think, well, all they have to do is x (whatever worked for you or your friend). It's so simple!

But you're treating the diagnosis like psychological jade. The condition might present the same symptoms, but that doesn't mean it has the same underlying fundamental structure, and thus there is no reason to suppose it will respond to the same intervention. It's a different kind.

Worse yet, someone could dogmatically claim that all diseases and/or psychological diagnoses share the same "root cause". Such dogmatism precludes any chance of recovery since the same ineffective treatment will only be applied more and more vigorously. What's more, this way of thinking is the opposite of scientific thinking. Saying everything has the same "root cause" is just like being a green-ologist. You're confusing superficial similarity for fundamental similarity. To use the lingo of metaphysics, you're lumping when you should be splitting.

We see green-ology all over the place in alt-med. For chiropractors, the "root cause" of all disease is some sort of spinal misalignment; for Ayurvedic medicine the "root cause" is chakra alignment (or some shit); for reflexologists the "root cause" is something to do with your feet (WTF? How are these people even a thing?); etc... (While I'm pointing the finger I should make clear that I have my own "root cause" default. I have a tendency to lump various problems as being caused by a general lack of meaningful social relationships, belonging to a community, and sense of purpose.) And then there's alt-med's favorite: stress. The "root cause" of all disease--physical and mental--is stress. More green-ology.

To be charitable we can say that stress can trigger or make people more susceptible to disease but this is to confuse notions of causation. Let me illustrate. Suppose someone is in the hospital with a broken leg because they got hit by a car. What "caused" the broken leg? Being hit by a car, right? Now, just because the car was the trigger for the broken leg no one in their right mind would believe that removing the car will heal the leg. 

I can just imagine the doctors at an all-alt-med hospital: "We've cured your leg by getting rid of the 'root cause'--the car has been destroyed! You can walk now!"

So, while it's true that stress can trigger certain reactions, it doesn't follow that the solution to the problem is merely to remove the trigger. Yes, doing so may decrease the likelihood of the same event from occurring again, just like not getting hit by a car will prevent you from breaking your leg again (that way); however, this insight is often of trivial value. No one with more than two brain cells to rub together thinks chronic stress is good for them. What's the next great insight? A poor diet isn't good for your health? Revolutionary! Please collect your Nobel Prize.

The Lesson
The causes of many diseases, physical and mental, have to do with their fundamental underlying structural properties. This is why people respond differently to different treatments. Superficial similarities can cause us to lump when we really should be splitting. Overzealous lumping leads to failed treatment and frustrated patients. Overzealous lumpers are green-ologists. Don't be a green-ologist. 

Anyhow, this is just one more cautionary tale for me to heed. Hopefully, it gives you pause too the next time you diagnose someone (including yourself) and assume that superficial similarity implies fundamental similarity...

Also, hopefully this little digression shows the value of philosophy to science. You can't do one without the other.

In a Nutshell
The conclusions I drew in my last post were wrong. But I'm leaving that post up as a cautionary tale to both myself and to anyone reading this. My hope is that it reminds us how easily we can get things wrong when we only have a little bit of information, particularly about areas where we are not experts. People think it's "being a sheeple" to defer to experts. It's not. It's smart and good epistemic practice. Only arrogance fueled by ignorance would lead a person to think that they know more than an expert in that expert's domain.

Thursday, May 14, 2015

Day 2: Naiveté, Falsificationism, and the Counterfactual

Psychiatric Inpatient
Many years ago in my early 20s I read a short story that stuck with me for its nightmarish realism. I can't remember the title or the author, although I think it was G.G. Marquez. I do know it was a Latin American author (if you know the story I'm talking about let me know so I can put a link--it's a great short story). The general plot goes something like this: 

A young woman is touring the countryside and goes into a tourist stop to get some food and use the bathroom. There's also a tour bus full of people at the stop. She happens to exit the store at the same time the tour bus is loading its passengers. One of the "tour guides" asks her where she's going. She says that she's going to her car to drive to the next town. She's on vacation. The tour guide smiles and nods and directs her to get on the bus. She's confused by this and tries to explain again. Again the tour guide gives a similar response, but this time calls over the other "tour guides".  Once again the woman tries to explain that she's not with the tour. They smile and nod.

Anyhow, to make a long story short, the people on the bus were patients from the local psychiatric hospital and the "tour guides" were the doctors. The upshot of the story is that there's nothing she could say to convince them that she didn't belong on the bus (and eventually institutionalized). When she got agitated, they interpreted this as a need for sedatives. When she explained her story, they interpreted this as delusion. 

My naive impression of the psych ward (and psych consult), in some cases, was very much like this. There was no way for the patients to answer  any questions without their answers being interpreted as evidence for some pathology.  In short, a psychiatric diagnosis was unfalsifiable. 

There were, of course, also cases where the patients did have obvious serious psychiatric problems--such as attempted suicide (and usually a history of the behavior)--but some of the patients' behaviors, to my naive eyes, seemed like totally rational responses to their difficult situations. Many were in there after a particularly traumatic and stressful event.

Since I now have an official visitors' badge I think this entitles me to give a diagnosis. Basically, my visitors' badge plus two days of observations put me at just one level below an expert. Anyway, my evaluation was that there was very little that was abnormal about their responses. The major difference between me (and many of my peers) and many of the patients is that we have access to the social and financial resources to weather a storm. Most of the patients didn't. Think about a crisis in your own life. Can you imagine having to go through that without the people that got you through it and, on top of that, having to worry about where you were going to sleep the next day? Very few individuals can bear trauma alone and under additional stressful conditions.

One gentleman's wife was dying. He was functionally illiterate and she had handled all their affairs. He had admitted himself and was very anxious. The staff kept asking questions to try to give a psychiatric diagnosis--i.e., they needed a label for his general anxiety and confusion. He kept repeating, "My wife is dying and I need to take care of her and I don't know what to do." Meanwhile the physicians were asking him to answer math questions and spell words backwards. I literally wanted to scream, "Why are you asking him these questions? He's already told you 3 times what the problem is. His wife is dying and he doesn't know how to handle it. How is this pathological? This is the response we'd expect from any normal person."

Here's the thing. What I don't know (and the doctors do know) is the patient's case history. He has a history of various serious psychological problems. While this particular response is normal it could trigger more dangerous responses. And so they adjust his medication.

I don't mean to be an apologist here. Unlike Food Babe, I don't want to be quick to judge things that I know very little about--despite what my gut tells me (which, incidentally, is never wrong--that's a scientific fact). Based on the little I observed, more than anything the guy needs social support. He needs someone to help him manage his wife's care. He isn't literate. How's he supposed to administer the medicines properly? Pay the bills? Manage everything that she had done previously for the both of them? He needs to know/feel that he isn't alone and that there are people that care about him.  Of course, this doesn't necessarily preclude medical treatment to stabilize his behaviors. But it seems to me the emphasis should be on social and emotional support.
. . . .

The psych ward is a strange place. The rooms are barren. No TVs, no radios, no paintings, no cellphones. The patients do, however, have access to books.  Think of a running track. The inside of the track is where the staff are. Most of the desks face outward so the patients can be observed while the staff work. The patients, with nothing to do, walk aimlessly around the island, again, again, and again. They're like zombies.

This setup strikes me as odd. Why the intentional sensory deprivation? How would you act if you were in a room all day with no art, no TV, no cellphone to text or call your friends, and nowhere to go and nothing to do? I can tell you right now, whatever sanity I had going in would be long gone after a few days.

Well, that was my first impression. It turns out, they do also get group and individual therapy. There's a stretching class. And there is a TV room--just not one in each room. Not as bad as my initial impression.

I met a woman who seemed quite normal. I can't remember how many days she'd been in the unit--less than a week. Anyhow, at the end of our first conversation she asked when she could go home. She wanted to go home that day. She was very polite about it but you could sense the pleading in her voice. The medical student said, "I'll talk to the doctor and we'll let you know in an hour."

We went back with the attending doctor about an hour later. We repeated the previous conversation. "How are you feeling? Are you hearing any voices? Do you have any desire to harm yourself?" All these questions were answered just as you or I would answer them. Again, she asked if she could go home today. The doctor replied, probably tomorrow.

I'm thinking: this woman seems totally normal. If I'd asked to go home and someone said no, I'd get agitated (which she didn't). Of course, she probably knows that if she acts agitated it will be further reason to keep her (i.e., the non-falsifiability problem).  Anyhow, later I asked the doctor why she didn't let her go home.

Here's where case history and having more than a visitors' badge are important. When the patient was admitted she was having suicidal thoughts and hearing voices. Of course, I didn't know that part, but given the conditions of the admission, the doctor's caution isn't as odd as it appeared.

And there's more. Every day these doctors must deal with the counterfactual. If a patient comes in (most seemed to be self-admitted) and the doctor releases them and something happens--a suicide or homicide--guess who has that on their conscience for the rest of their lives? If you knew that someone in your direct care was a suicide or homicide risk, would you release them the first day they said they felt better? From this point of view, keeping someone an extra day or two doesn't seem so strange.

Wednesday, May 13, 2015

Day 1: First Impressions

For the next month-ish I'm observing rounds in a Cleveland-area hospital. More on how I got here later; for now, here are some of my first impressions.

From my notebook while I was waiting for the MICU (Medical Intensive Care Unit) rounds to start.

People look tired but are friendly with each other. A janitor sees me sitting by myself in the atrium. "They didn't forget about you did they?"  The visitors badge combined with my nervous fidgeting must be more obvious than I'd hoped.

8:15-noon MICU Rounds
I've always had a lot of respect for doctors but observing the MICU doctors took my respect to another level. As a patient you might think your doctors barely know your name. In fact, they know every minute medical fact about you. They know your medical history. In many cases they know your family's medical history. For every chemical, endogenous or foreign, they know the concentration in your body. If you've been in the hospital for a while they know the whole history of every chemical's concentration in your body. They know what each chemical concentration might indicate about your health. They know a bunch of stuff about you that was too fancy for me to follow or remember. The amount of health-relevant data they have on you is staggering. And like I said, it's not just that they have so much data it's that they interpret and analyze all of it.

Of note: Most of the patients in MICU were alcoholics and/or smokers whose bodies eventually couldn't keep up. Most end up with partial organ failure and chronic infections all at once. If you're a hard drinker or smoker, please stop. The future isn't bright.

1:20pm-5pm Psychiatry Consult Liaison
When patients get admitted into emergency and there are obvious or probable reasons for psychiatric evaluations, these doctors are called in to do just that. For example, if someone suffered an injury that looks like a suicide attempt or someone is delusional or even extremely depressed the psychiatry evaluation team is called in.

To me this was by far the most interesting part from the point of view of medical ethics. Based on the psych evaluation, the patient can be forced to stay in the hospital--for example, if they're perceived as a suicide risk.

I filled half a notebook on my thoughts and experiences doing this round but I'm going to limit this entry to just one anecdote.

As an official observer--unless asked for my opinion--I'm expected to do just and only that: observe. But this isn't always easy.

One person I saw was extremely depressed. They (I'm intentionally using an ambiguous pronoun) were in their later years but still living independently---even working.  They had recently beaten what is for many a terminal illness. They had mustered the strength to get through the treatment. Quite a feat for anyone. They got their life back. They fought for their life back. They were exercising again and even working. Life was looking up.

During the post-treatment there was a complication and their organs failed. Now there is nothing that can be done. All that fighting for nothing. They explained to the doctor what had happened. "Do you want to live?" asked the doctor. "Not like this...not like this. I'm independent and I took care of others and now I can't even take care of myself".  The patient began to cry. I wanted to comfort the patient and hold their hand but all I could do was observe.

Tuesday, April 28, 2015

Evidence for Moral Claims Part 2: Coherentism

Overview of Coherentism
The issue on the table is to figure out what it takes for a moral claim to be epistemically justified. The coherentist offers the following general account of justification: a moral belief is justified to the degree that it coheres with the believer's other beliefs. In short, justification is cashed out in terms of a belief's inferential relations to other beliefs. Central to the coherentist position is that there is no privileged set of beliefs that grounds other beliefs. All beliefs are justified in terms of other beliefs--regardless of what kind of beliefs they are.

Coherentism is perhaps best understood in contrast to foundationalism. A foundationalist holds that certain kinds of beliefs are--well--foundational while others are either inferred from or not as privileged as the foundational set. For example, in terms of justificatory status, Descartes' foundationalism privileges a priori beliefs over empirically derived beliefs. That is to say, if an empirically derived belief conflicts with an a priori (or derived-from-a-priori) belief, the latter belief is more justified because a priori beliefs have more justificatory status (i.e., they're more foundational). In contrast, the coherentist would say that the best justified belief is the one which best coheres with the totality of your beliefs--regardless of kind.

I want to emphasize that the coherence theory (of the type advocated by Sayre-McCord) isn't a theory of truth; rather, it is a theory of justification.  Coherence doesn't make beliefs true; coherence simply justifies a belief for an agent.

Emotional Experiences, Justification, and Coherentism
In my previous post I argued that it's plausible that our emotional reactions to certain situations play an evidential role for moral beliefs. In short, in some cases we use emotions to support moral claims, or emotional experiences can change our moral positions.  If we think that emotions can sometimes count as evidence for moral claims then, I will argue, coherentism isn't able to capture this type of evidence in its account of how moral claims are justified. In other words, coherence is sufficient for justification but not necessary. The reason for coherentism's difficulty in accounting for emotions as evidence is that (obviously) emotions are very different sorts of things from beliefs. And, on a simple version of coherentism, only beliefs can justify other beliefs.

Let me briefly make explicit the problem. Beliefs--moral or otherwise--are propositional in nature thus can, in principle, have a truth value. Emotions, on the other hand, are not propositional and so cannot have a truth value.  So, (obviously) emotions are not the same sort of thing as beliefs. There doesn't seem to be any obvious way on the coherentist model that something non-propositional could count as evidence for a proposition because the only thing that can justify a belief is its relationships to other beliefs.

Coherentism and Perceptual Experience
Well, not so fast. In some cases--namely perceptual experience--it doesn't seem far-fetched to say that something non-propositional could justify a proposition. For example, the experience of seeing red seems to support the proposition "I see red".  Or maybe the experience of stubbing your toe plausibly supports the proposition "I'm in pain".

So, even though perceptions aren't--strictly speaking--propositional, the tight connection between direct observation and propositional beliefs should make us willing to allow such experiences as able to justify certain beliefs. (More on this later.)

However, even if we allow perceptual experience to justify beliefs on the coherentist model two problems arise. The first problem is general to all types of experience as evidence and the second regards only extending coherentism to allow emotional experience to play a justificatory role. Let me address the first problem...uh...first.

The Isolation Problem
Suppose all my life I've believed that brick walls are soft.  Maybe my parents are the world's greatest pranksters or I was home-schooled or something...Anyway, I've never actually seen a brick wall but I've read a lot about them. As a toddler I was lulled to sleep with stories about their pillowy softness. At dinner my father would regale me with tales of the Great Wall of China which is the longest man-made pillowy structure. In short, I have many many beliefs about the pillowy softness of brick walls.

Anyhow, one day I go off into the world and encounter a brick wall, which is perfect timing because I'm really sleepy and could use something comfortable to rest against. I can't wait to feel that pillowy softness. I triumphantly yell "Geronimo!" and sprint head first into the brick wall. To my shock and awe, the wall does not feel pillowy soft. In fact, it feels as hard as a cloud (my parents also taught me that clouds are very very hard).  On the coherentist model, despite my experience of hardness, I shouldn't reject my belief that brick walls are pillowy soft. Why? Because I only have one belief, "brick walls are hard", yet I have many many more beliefs about the pillowy softness of brick walls that support "brick walls are pillowy soft". If justification is a matter of coherence with other beliefs then the latter belief coheres better with the totality of my beliefs than the former.

In short, a problem for coherentism is that there will be at least some cases (perhaps not as far-fetched as my example) where we think a single experience-derived belief warrants over-riding many beliefs. However, on the coherentist model, we shouldn't do this because the single belief won't cohere as well with the totality of beliefs as will the contrary proposition. The single (recalcitrant) belief will not be justified.
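Here's a deliberately crude way to see the isolation problem, with coherentist justification caricatured as raw agreement-counting over an invented belief set. Real coherentism is far more sophisticated than this toy, but the toy makes the worry vivid: the lone experience-derived belief loses.

```python
# Caricature of coherentist justification as agreement-counting.
# Every belief here is invented; each background belief is tagged with
# what it implies about the texture of brick walls.
background = [
    ("my parents said brick walls are soft",       "soft"),
    ("bedtime stories praised walls' pillowiness", "soft"),
    ("the Great Wall is a pillowy structure",      "soft"),
    ("books describe brick walls as cushiony",     "soft"),
]

def coherence_score(implied_texture):
    """Count how many background beliefs agree with the candidate belief."""
    return sum(1 for _, texture in background if texture == implied_texture)

# The one recalcitrant, experience-derived belief: the wall felt hard.
print(coherence_score("soft"))  # 4 -- coheres with everything
print(coherence_score("hard"))  # 0 -- coheres with nothing
# On this crude model the entrenched "soft" belief stays better justified
# than the belief delivered by direct experience: the isolation problem.
```

On this scoring, no single experience can ever outweigh a large enough stock of entrenched beliefs, however misguided those beliefs are.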

It may be that a sophisticated account of coherentism can handle this objection but it's not clear that it can. I'll leave the matter as it is for now and move to my next point.

Emotional vs Perceptual Experiences
Earlier, I suggested that we should allow coherentism to admit perceptual experiences as being capable of justifying beliefs because of the short inference from the experience to the belief. In fact, Sayre-McCord says as much:

It's true, coherentism doesn't allow experience as relevant to justification unless and until the experience comes into the person's cognitive economy. Yet, especially in its recognition of cognitively spontaneous beliefs, coherentism leaves room for experiences to enter that cognitive economy unbidden, either thanks to the experiences themselves having cognitive content (in which case it is the content of the experience that serves as evidence) or by their being the content of an appropriate cognitive attitude (in which case it is the fact that such an experience occurred that serves as evidence). (Ibid, p. 122)
Ultimately, the coherentist is able to admit experience as evidentiary but its justificatory status will still be cashed out in terms of its relations to other beliefs. Let's accept this. Now, what about emotions? They're a kind of experience and so if we've admitted experience it looks like my objection has been met.

Nevertheless, I want to argue that the category "experience" is too vague and is unable to satisfactorily capture both emotional and perceptual experiences.  Lumping both into the same category relies on an implicit analogy between the two types of experiences. However, it's not obvious that these two types of experiences share any properties relevant to our purposes. For example, the inference from the experience of seeing a red pen to the belief "I see a red pen" or "the pen is red" is short and obvious (philosophy of perception aside). It's not going to be so obvious in the case of emotions.

Perception is, at its core, representational, yet it's not clear what it is that emotions are representing. Here's where I think the analogy falls apart.  Because of this disanalogy, it's not obvious to me how an emotional experience gets us to a propositional belief.  Think back to the three ways I suggested emotions function as evidence in our moral reasoning. In the first way we ask our counterpart how they would feel if they were in such and such circumstances. In the second, a feeling (e.g., love) overwhelms a consistent set of beliefs (e.g., that gay marriage is wrong). In the third type of case, visceral images or experiences evoke strong emotions such that we come to endorse a position inconsistent with all our other beliefs. Suppose we are in a debate about the moral permissibility of drones.  For example, a proponent of drones might be shown interviews and footage of what families in a small Yemeni village endure on a daily basis as a consequence of drones and reverse his endorsement. To be clear, these interviews and images don't come in the form of argument; rather, they are a montage that elicits certain powerful emotions.

In none of these cases is it clear how the emotional experience translates to a propositional belief. (I need to expand on this). And even if we can give some sort of account of how it might we're left with the isolation problem from above. There will be cases where all my previous beliefs cohere best with my previous position on the issue at hand yet the emotional experience over-rides them. The coherentist model tells us that I should reject my new position.

I suppose the coherentist could (coherently) reply that, "yup, you aren't justified in endorsing your new position because it doesn't cohere with the totality of your beliefs." At this point I'm not sure where to go. It looks like the issue turns on who can pound their fists hardest on the table.  I want to say that the person's post-emotional experience view is justified by the experience while the coherentist will pound back, with equal vigor, no it isn't!

It seems like we've ended up back at the normative/descriptive divide. As a matter of anthropology, yes, humans do use emotional experience as evidence for beliefs. Whether we should is another matter. And so we return to my conditional claim: If we think that emotions ought to count as evidence for moral claims, coherentism can't accommodate them as such.

1. I'm equivocating on what it is for a belief to be justified and what it is that justifies a belief.

2. What's the phenomenology of adopting a new belief in response to an emotion? How would you characterize it? In many cases it doesn't seem as though any of your other beliefs have changed, yet you adopt a new position.  In some cases, is the emotional reaction simply highlighting or giving greater weight to some beliefs rather than others?

3. The coherentist can say, "yup, emotions are important to moral reasoning in that they can get us to reflect more deeply on the beliefs we subscribe to, yet the emotions themselves don't justify beliefs." In which case I need to throw this whole paper away and start from scratch.  But still I want to resist this and say (while pounding my fist on the table) that at least in some cases the emotional experience is providing evidence for a new (?) moral belief.

Saturday, April 25, 2015

What Counts as Evidence for a Moral Belief?

For a couple of years now I've been perplexed by the following problem: What counts as evidence for or justifies a moral belief?  Even in asking the question I'm a bit confused because there are two possible interpretations of the question (possibly more). We might say that what justifies a moral belief is that it is well-supported by an argument. In other words, it's supported by some other beliefs that are presented in a systematic way. But this isn't a very satisfying answer because inevitably we are going to wonder what supports the justifying beliefs. There's a danger of regress which I'll address later.

What I really mean to ask is, what sorts of things count as evidence for a moral claim? Arguments using other beliefs are one sort of 'thing'. But is that it? Are beliefs the only things that can justify beliefs? I'm compelled by the view that we should admit other sorts of things as evidence, namely, emotions.

On the face of it, it sounds crazy. Imagine you're in court and you believe someone wronged you and the judge asks you to please justify your claim.  "What evidence do you have for your claim that Mr. X wronged you?" he asks. "Well," you reply, "it just feels like he did." I doubt your case will do too well.

But let's pause for a moment and think about how, from a very early age, moral education proceeds. One way we do things (both as children and as adults) is that we offer arguments and introduce facts in order to support moral conclusions. For example, when a child punches another child we might explain to him that you shouldn't punch other people because punching hurts them. And hurting people is baaaaaad!

On this model we support our claim "one ought not to punch others" with other beliefs. If you read much of the philosophical literature on moral reasoning you might think this is the only tool available for moral education.

A little reflection reveals that it isn't. I submit that in large part support for moral claims comes from the emotions. That is, emotions lend support to moral claims and therefore are a kind of evidence. Let's revisit the child puncher to see why. There's a much simpler--and, I would argue, more effective--way to get him to see why it's true that he shouldn't punch other people. We ask him "how would you feel if someone did that to you?"  No argument needed.  And I don't think it's unreasonable to suggest that he sees the strength of the moral claim more quickly and forcefully than if only the first method had been used.

Appeals to emotions as evidence aren't restricted to the moral education of children. As adults we appeal to emotions in conjunction with arguments as well as when arguments fail to convince others of our moral claim. In fact, we often use the same technique we employ with children. When arguments fail us, to the recalcitrant party we ask "how would you feel if I did that to you?" When the other party makes a genuine effort to introspect on what it would feel like to have x done to them they might come to endorse the claim that they shouldn't x. The change of heart comes about without appeal to argument or beliefs as evidence.

Here's another way we might think that emotions are being used as evidence for a moral position. Most people are aware of the fact that animals are often mistreated on factory farms. They also believe that it would be wrong to support any institution that mistreats animals. Nevertheless, as is often the case, many people don't come to oppose factory farms until they are actually shown videos and images of what occurs inside some of the farms. 

What's going on here? It's not as though they acquired any new beliefs. They already knew that animals are often mistreated in factory farms and they already believed that it's wrong to support institutions that mistreat animals.  My suggestion is that emotion has played an important evidential role in their position change.  So, more generally, it looks like emotions play an evidential role in moral reasoning. It is, admittedly, another step to say that the emotional response to the horrific treatment of animals justifies their changed belief. I will address this concern shortly.

I want to suggest one more way in which I think emotions count as evidence for moral claims. Consider the case of a conservative religious opponent of gay marriage (is there any other kind?). Let's call him Dick. Dick dearly loves his daughter. At some point in her adult life, Dick's daughter confesses to her father that she is gay. Fairly quickly Dick reverses his position on gay marriage. What happened here? Did Dick suddenly come in contact with a new, never-before-heard, compelling argument for marriage equality? Did he acquire some new beliefs that support the claim that marriage equality is just? Doubtful. It's also doubtful that he just learned that he has the belief "I love my daughter and I want her to be happy." No, that's not likely it.

Likely, it is his love for his daughter and his desire that she be happy that's doing the work. In fact, it's not implausible that in his entire set of beliefs the only belief that changed was his belief regarding the permissibility of gay marriage.

What I've outlined are a few ways in which I think emotions are employed in our everyday moral reasoning. As I alluded to above, it is a separate question whether emotions ought to be used as evidence; that is to say, whether emotions are able to justify moral claims. I don't want to argue for that positive claim here, but I do think it would be odd to say that in our moral reasoning we ought never to take our emotions into account.

For this reason I simply want to defend a conditional claim: if our emotions can sometimes count as evidence for moral claims, then several contemporary models of moral reasoning can't adequately account for all the ways in which we come to endorse moral claims.

In closing, I want to flag some potential problems with my view. First, I want to quickly return to considering emotions as justificatory. From the fact that we use emotions in moral reasoning it doesn't necessarily follow that we ought to. We might think that emotions can lead us to bad moral conclusions, not just good ones. To say that we ought to use emotions as evidence, I'd have to show that our emotions get it right more often than they get it wrong, or at least pick out the types of cases where they tend to be more reliable than not. I'd also probably have to go through each emotion, because some (possibly the reactive emotions) might be less reliable than others. There's no reason to suppose that every emotion is equally likely to lead us to a 'good' conclusion.

And while I'm at it, there's a further problem. My account presupposes a moral view. For example, in the gay marriage case we (most people reading my blog) think that Dick's love for his daughter got him to the right answer only because we happen to endorse marriage equality. Someone who didn't endorse that view would say that Dick's love for his daughter blinded him to the truth. On the other hand, any theory of evidence will have to contend with this same problem: whether a belief leads one to endorse the 'right' position will depend in large part on what you (dear reader) think the right position is.

But on the third hand, there's no need to suppose that a belief has to be true in order to be justified. We can be anti-realist about moral claims and still think that for moral claims some things count as better evidence than others and some claims are better justified than others.

Meh...ethics is complicated.