Thursday, December 31, 2015

A Defense of New Year's Resolutions (That THEY Don't Want You to Know About)

Around this time of year my news feed is infected with click-baity articles or Facebook statuses about how New Year's resolutions are stupid or bad or whatever and "I'm better than people who make NY resolutions because I recognize that every day, NAY! every second! is an opportunity to redefine myself for the better. So there, you philistines! Shield thine eyes from the light of my transcendence!"

Anyway, here are a few counter click-baity responses to internet self-righteous anti-NY resolution sentiment. 

Reasons for NY Resolutions THEY Don't Want You to Know About
Reply 1: Fuck you. Get over yourself. 

Reply 2: This may seem odd coming out of a philosopher's mouth but maybe it's cuz it comes from experience: Too much reflection on how to live can be a bad thing. It takes you out of 'just living'. While you're thinking about how you ought to live, life is passing you by--one missed experience at a time (#gradschool). Descartes himself--of the Meditations--reprimands Princess Elisabeth of Bohemia for being over-reflective. (Of course, in this case it was cuz she had a devastating challenge that he couldn't answer, but let's pretend it was for some other reason).

Also, didn't Socrates famously say that the unexamined life is not worth living? True. But it doesn't mean you need to examine it all the time. You have to have experiences to examine in the first place.

OK, so there you have it. Too much navel-gazing can pull you out of your life. You emerge from your contemplative state only to find you've missed out on living. 

Where was I going with all this? Ah, yes. It's actually not a good thing to be reflecting all the time, and so having culturally agreed-upon reminders to do so is helpful: No need to do it all the time. You'll be reminded at least once a year to do so and you'll probably do it on your birthday too--especially for each birthday after 30 (e.g., "Where the fuck did I go wrong?" "How do I undo this mess?").

Make Resolutions and Keep Them Using This One Weird Trick! (Motivational Speakers Hate Me!)
Having resolutions as a social practice leverages the social resources that will make us more likely to keep them. (I said "more likely", not "guaranteed".) Many scienticians agree that the most powerful motivational forces are social forces and NY resolutions engage these forces in various ways. 

First, choosing your goals is both an individual and a social act. The goals we choose are individual to us but they are also often the result of social feedback we've been getting over the year. If you're a student and your professors keep telling you that you need to work harder, this might be a sign that you should work harder. If your friends keep telling you that you don't hang out enough with them, maybe this is a sign that you could be a better friend. If people seem to avoid conversations with you this might be a sign that you need to be a better listener or need to consider your words more carefully. I'm not sayin'....I'm just sayin'...

Resolutions are also social in that we look to others as possible models for the types of behaviors and goals to pursue. It doesn't mean we're going to mimic someone else exactly. Looking to others we admire (or despise!) gives us a point of departure to craft a goal suitable to ourselves and our particular circumstances. It's a way to activate our imagination such that we can see possibilities for ourselves.

As I mentioned, many scienticians agree that the most powerful motivational forces are social. If you publicly declare your goals (to friends, family, to every damn person on social media, or all of the above) you won't want to look like a dumbass by failing. Social pressure can be used to your advantage. Fear of shame is an excellent motivator. But fear of public shame isn't the only possible benefit from publicly declaring your new goals. If you aren't surrounded by a bunch of assholes, some of the people to whom you declare your goals can (gasp!) offer you encouragement and actually help you stay on track in those moments where your own willpower subsides. Confiding our resolutions to (carefully selected) others creates a support network that can help us actually achieve our resolutions.

Of course, some of you are thinking, "I don't need nobody to help me with my goals. It's all about survival of the fittest and I can handle my shit."  Ok, maybe you can. Good for you. You're Jesus. Now, go back to reading your Ayn Rand books in the corner.

The bottom line is that while it's true in theory ("here goes the philosopher with his got-tam theory...") that we can reflect on our attributes and projects at any moment, in practice (that's the science part!) most of us don't. The collective tradition of doing so around New Year's provides a periodic reminder for us to reflect on our past year and the things we'd like to change or accomplish in the upcoming year. A healthy practice to my mind...

I Know I'm not Perfect But How Do I Pick My Resolutions? Use These THREE Weird Tricks!!!!!
For the most part, I think most of us, upon even the most cursory reflection, are aware of where we need "work". It is mostly a matter of committing to the change you want to see. That said, here are a few general ideas inspired by a few philosophers for how to pick some resolutions. The first involves self-directed resolutions, the second other-directed resolutions, and the third the alignment of meaning and fulfillment.

1. Aristotle: For Aristotle and most of the ancient Greek philosophers, virtues were not only a means to a good life, they were necessary components of a good life. That is to say, you can't have a good life unless you develop your virtues. Quick aside, for the Greeks, a good life meant a "flourishing" life. So, if a flourishing life is something that grabs your fancy, think about a virtue or two that you could further develop. Here's a list of the classic virtues: courage, temperance, liberality (generosity), magnanimity, proper ambition, patience/good temper, truthfulness, friendliness, and modesty.

2. Sartre/De Beauvoir: Sartre says we construct meaning in life through our projects. So, what's to prevent me from creating meaning through morally abhorrent projects? Nothing. You can derive meaning from evil projects, but this wouldn't be an ethical life. The ethical constraint on life projects for Sartre and De Beauvoir is that they must be in the service of freedom. To understand what this means in practical terms we need to take a quick step back and ask why freedom is important in the first place. 

Without going too deeply into it, freedom is morally important because it is necessary for self-actualization. And self-actualization is necessary for a shot at a life worth living. If your own opportunities and resources for self-actualization are morally important, then so are those of others (unless you think there's something metaphysically special about you). This means that your life project ought to involve, in part, helping those with fewer opportunities and resources for self-actualization than you. People who live in poverty or don't have access to a good education or healthcare, etc., have on average greater barriers to self-actualization. An ethical life is one that includes both working toward undermining those barriers and helping the individuals affected by them.

In terms of a NY resolution, Sartre and De Beauvoir would say you ought to resolve to include in your core life activities efforts towards helping others gain access to the resources that facilitate self-actualization. In short, find a social cause to include in your life project. Resolve to serve others.

3. Susan Wolf: We all want a life that is worth living but what does that include? Whatever it is, it requires activities that are meaningful and fulfilling. We might think that if an activity is meaningful it's also fulfilling and vice versa. Susan Wolf argues that meaningfulness and fulfillment can come apart.

Let me illustrate. You can have a career that is very meaningful yet not personally fulfilling. Perhaps your parents groomed you to be a doctor or something. You were too young to know how you'd feel about actually being a doctor so now that you are one, you do it. It's meaningful work. You save lives. You cure the sick (contrary to what the idiots at Alt Mama or TinFoilRUs say). Yet, strangely, you don't feel fulfilled. The idea is that despite the fact that an activity or career is meaningful, not everyone will find it fulfilling because of differences in our individual constitutions. 

Conversely, you can find an activity fulfilling yet it is not meaningful. I love to watch UFC. I find it really fulfilling. I feel great after watching people knock each other unconscious.  Is this a meaningful way to spend my time? Probably not.  

So here's an idea for a resolution. Try to align your life projects in terms of meaningfulness and fulfillment. If your career or some central aspect of your life is meaningful but unfulfilling, find a way to make it fulfilling. If you can't, think about finding a new fulfilling activity that is also meaningful. Conversely, if your job or some aspect of your life is fulfilling but isn't meaningful, see if you can make it meaningful. Sometimes this won't be possible and so you might consider changing to an activity in which the two attributes align. 

What I've suggested is no easy task: it means fundamentally rearranging important aspects of our lives. But if we seek a life worth living, it's hard to see how such a life is possible without consistently striving to fill our lives with activities that are both meaningful and fulfilling. This requires making room for them by letting go of those activities where the two qualities don't overlap. It won't happen overnight. It's a process and an approach to living that requires careful reflection on the various ways we fill our days.
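If it helps to see the combinations laid out, here's a toy decision table in Python (purely my own gloss on the resolution above; the advice strings are invented for illustration, not anything Wolf herself says):

```python
# Toy 2x2 of Wolf's meaningful/fulfilling distinction.
# The advice strings are my paraphrase of the resolution sketched above.

def advice(meaningful: bool, fulfilling: bool) -> str:
    if meaningful and fulfilling:
        return "keep it: this is where you want your core activities to live"
    if meaningful:  # meaningful but not fulfilling (the groomed-into-medicine doctor)
        return "try to make it fulfilling; failing that, swap in something that is both"
    if fulfilling:  # fulfilling but not meaningful (my UFC habit)
        return "try to make it meaningful; failing that, swap in something that is both"
    return "let it go to make room for activities that are both"

# e.g., meaningful-but-unfulfilling career:
print(advice(meaningful=True, fulfilling=False))
```

Trivial, sure, but it makes the structure of the advice explicit: only one of the four quadrants gets a "keep".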

And New Year's is as good a time as any to start....

Tuesday, December 29, 2015

YOU are a Philosopher (Sometimes)

When people talk to me about studying philosophy, I’m usually met with similar questions. What’s the point of doing philosophy? When can you even use it? Here’s a lengthy paraphrase of what I sometimes say.

You already do philosophy all the time and it's the single most important activity in your life. You just don't realize it. Every single reflective decision you make is an act of philosophy. Philosophy can be described as the activity of finding reasonable beliefs (then hopefully acting on them). When you decide to eat one thing over another, in so far as you reflect on your choice, you engage in philosophy.

Why eat one thing over another? Does taste matter more than nutritional content? Does price matter more than quality? Does one meal look better than another? How important are aesthetics in your choice? None of these questions can be answered scientifically. No amount of beakers and Bunsen burners can tell you whether, forced to choose, you should eat the sandwich or the cookie. What informs your choice are reasons.

You might say, well obviously you should eat the more nutritious item. Maybe. But again this is a philosophical choice. If, in your food choices, you value health above all else, then yes. But should we value health above all else whenever we choose what to eat? Although perhaps trivial, this is a matter of philosophy and it’s something you do with every conscious decision you make. Philosophy is everywhere.

Which courses did you choose for next semester? What career are you pursuing? Do you pursue the higher-paying job that you won't enjoy but that allows you to live the lifestyle you want outside of work, or the job that pays less but fills your life with a sense of purpose? Again, not even the most powerful microscope in the world can tell you what to choose. Your choice will depend upon weighing various reasons—that is to say, engaging in the activity of philosophy.

Which political candidate do you favor? Which political issues matter to you? Why these and not others? These are philosophical choices. Science can’t tell you what to value. In deliberately making choices in these respects—choosing some politicians over others, some issues and positions over others, you’ve done philosophy. 

Do you believe in God or gods? Which one(s)? In so far as your answer is a product of weighing various reasons, it was a philosophical decision. The same is true if you don’t believe in any gods.

So, what’s the difference between taking a philosophy class and the philosophy I just told you that you do every day? Think of it this way. You also exercise every day: You walk to class, you walk to your car. You clean your house. You are exerting yourself physically and expending energy. You are exercising, albeit unconsciously. In the same way, you engaged in philosophy when you chose to read this article instead of watching another cat video on YouTube.

When you go to the gym, exercise isn’t unconscious. It’s structured, it’s systematic, it’s rigorous, and there are determinate goals. Today is leg day. 5 sets of squats, 6-8 reps each and then on to deadlifts.  It’s the same for philosophy class. We’re doing the same thing you already do every day unconsciously except it’s structured, systematic, rigorous, and there are determinate goals. 

This month we’re studying free will. First we’ll read Strawson and systematically evaluate his Basic Argument. Then we’ll read Frankfurt and his compatibilism. Then we’ll look at the Libet experiments and the various interpretations of the results. Even if we get a bit tired we keep going, just like you don’t quit after the 3rd set of squats. You gotta keep squatting if you want good glutes. You gotta keep reflecting and reasoning if you want good reasons to believe one thing rather than another. The more you do it the better you get at doing it and slowly but surely people start to notice your gainz. You get into a debate and you flex your philosophical muscle. People notice. You notice. You critically evaluate some of your old beliefs and realize they’re founded on weak reasons. You change your beliefs. You’ve taken one step closer to Truth by discarding false beliefs. Dem gainz are showing. It feels good, and so you continue. Welcome to philosophy.

Saturday, December 19, 2015

Training Guide for Dogs


Suppose you want a treat but your human is frantically trying to get his work done, thus not paying attention to your usual manipulations. (Oh, did you think this was a guide for how to train dogs? No, no, no, silly human. It's a training guide for dogs to train their humans.) Here’s the one trick THEY don’t want you to know:

Go stand by the door and whine like you have to go to the bathroom. Your human will have to get up cuz he thinks otherwise you’ll pee in the house. (You already did but he didn’t catch you, lol!). 

Next he’ll let you out. Pee a little to make it look like you had to go, then run right back into the house. As you come back in, run over to the treats cupboard and do whatever your usual manipulative behavior is. Your human’s concentration is already broken at this point so he’ll give in and give you a treat just to get you to stop.

In testing this technique I discovered that using it every 5 minutes doesn’t work so well because after a while, the human catches on. Who pees every 5 min? AmIright? For best results I recommend using this trick no more than about once an hour or so. After an hour they’re more likely to believe that you actually have to pee.

For more advice on how to train your human, please submit requests to this blog.


This is how to get treats!

Friday, November 13, 2015

Repressed Emotions: What Are They Supposed to Be? Part 1

Introduction and Two Theories of Emotion
You'll often hear the term 'repressed emotion' bandied about in the self-help industrial complex. In this alt-world, if you have a mental or physical illness, you can bet your bottom dollar that it's caused by a repressed emotion (usually from your childhood). The concept is often used as the cause and explanation of everything. I'll argue in Part 1 that the concept of repressed emotions as it's used in the self-help industrial complex is unintelligible. However, it does have a more respectable pedigree in Freudian psychoanalysis. In Part 2, I'll take a look at the more sophisticated Freudian account.

A Tale of Two Theories of Emotion
Let's set aside repressed emotions for a moment. Before doing a preliminary evaluation of the concept we'll need to take a quick look at the two basic theories of what emotions are. 

I'll begin with what I'll call the cognitivist account. On this view, an emotion is made up of two elements: it's a conjunction of (a) a judgment/appraisal and (b) a feeling/sensation. For example, when I experience anger, the emotion is comprised of 

(a) a judgment or evaluation that person x is disrespecting me or treating me unfairly, and
(b) the phenomenological sensation of anger (tension, heat, etc...).

Or, for example, when I experience fear the emotion is comprised of 

(a) the judgment or evaluation that something is dangerous to me/can harm me, and
(b) the phenomenological sensation of fear (racing heart, strange feeling in the stomach, etc...)

In sum, the cognitivist account requires that emotions have cognitive content (i.e., beliefs, concepts, judgments, appraisals) as well as a phenomenological 'feel' (i.e., a 'what-it's-like-ness'). The ability to distinguish mere feelings from emotions marks one of the advantages of the cognitivist theory. The difficulty of explaining how animals and pre-linguistic children can have emotions counts against this view, because it would require ascribing to them sophisticated concepts and beliefs.

The other main theory of emotions argues that emotions are simply a kind of (i.e., a sub-class of) feeling. On this non-cognitivist theory, emotions don't have cognitive content, they are just specific kinds of 'feels'. For example, being angry just is feeling a certain way. Being scared just is feeling a certain way. To have an emotion you needn't have made any appraisal or judgment or have any beliefs about the object/person/situation causing your feeling. It's all in the feels...

A major advantage of this view is that it's easier to ascribe emotions to animals and pre-linguistic children. Making judgments and appraisals requires complex concepts, something not easily ascribed to infants and animals. A problem attributed to this view is the inability to distinguish kinds of feelings. For example, the 'feel' in your stomach when you are scared and when you are angry might be the same. The feeling theory has to say they are the same emotion. There's also the problem of 'an explosion of emotions'. For each slightly different feeling there is a different emotion since what defines an emotion is its feel. Different feel=different emotion.

I won't get into the debate between the two accounts of emotion. There are ways (i.e., there are volumes of articles) that proponents of each view try to respond to the various challenges and criticisms. I merely want to point to some of the advantages and disadvantages with each and provide a framework in which to give a preliminary assessment of 'repressed emotions'. If we're going to say something is a 'repressed emotion' we need some general idea of what an emotion is first...

First Pass: Repressed Emotions Don't Make No Damn Sense
The Cognitivist View: Let's suppose for a moment that the cognitivist view of emotions is correct. An emotion is the conjunction of a judgment/appraisal and a feeling. Suppose my local self-help guru tells me I have [insert illness] because of repressed anger (from my childhood--of course). First of all, on this view, part of having an emotion requires that I have made an appraisal or judgment about some situation or person. This implies that I have a belief about that situation or person.

Mysteriously, when the guru asks me if there's anything about my childhood I'm angry about, nothing comes to mind (cuz everyone had perfect childhoods and never got angry about anything).  So, of course he doesn't name anything specific but encourages me to look deeper because there must be something. Why else would I have [insert illness]? 

Here's the thing. If I have an occurrent belief that situation x made me angry as a child, it's not a repressed belief. It's occurrent. So, that can't be the anger that's causing my [insert illness]. The belief has to be buried. It's a belief I can't access--yet it's there! Somehow, I believe something I don't know I believe! 

At this point, some charity is in order. The mind is not totally transparent, and subconscious and unconscious thought and belief are fairly widely recognized phenomena in psychology. So, let's grant for a moment that I have some subconscious belief that I was treated unfairly in situation x as a child. We'll set aside the strangeness of having a belief that you don't know you have.

At this point in the session, the guru will prod me with more questions, getting me to search my mind for instances where I might have believed myself to have been unfairly treated. As though by magic, I find a memory of an instance of being upset as a child! I judged my treatment as unfair when blah blah blah... 

Here's a question: Is this a judgment I formed as a child that I'm now recalling, i.e., is it a judgment buried deep in my consciousness until now, or is it a belief that I have formed now, after much suggestion and prodding? (Because "surely there was something in your childhood that upset you, otherwise there's no other possible way to explain your [insert illness]"). Also, consider how memory works. It's not like retrieving a file from your hard drive. When you "recall" something you are actually recalling the last instance you thought about it and then reinterpreting it in light of your life's current narrative. How likely is it that what you are recalling from so long ago is actually what happened? (Hint: Not likely).

Again, for the sake of argument, I'll be charitable. Suppose the guru has managed to get me to retrieve (the memory) of this old appraisal about how I was treated. Ok, so far I have one half of what makes an emotion an emotion. I have a judgment/appraisal. 

There's still a problem. If an emotion is made up of a conjunction of a belief and a feeling, I need to 'find' the feeling part. But how can you have a feeling that you don't feel? How does that make sense?

If my repressed anger is indeed an emotion then by definition it has an affective/phenomenological component. It has a 'feel'. But if it's repressed, I don't feel it...otherwise it wouldn't be a repressed emotion, it would just be an emotion that I'm experiencing.

Simply put, either you feel a repressed emotion or you don't. If you feel it then it doesn't make sense to call it repressed because you're feeling it. If you don't feel it then how is it an emotion if emotions are constituted in part by a sensation? 

At this point, a proponent of repressed emotions could reply that when I bring the appraisal of my treatment as unjust to my conscious awareness, then the feeling will follow. But this reply doesn't seem to work. It looks like the feeling of anger is being caused by the belief. In other words, I didn't have a repressed emotion, I had a repressed belief. When I dredged the belief out from my unconscious mind and onto the main stage of the theater of my mind, I reacted to it (in light of my current circumstances and coaching). The feeling and the belief weren't unified when they were in my subconscious, which is what is needed in order for them to count as an emotion (on this view).

Here, the repressed emotion theorist could give up on justifying the phenomena in terms of the cognitivist view. Maybe emotions just are feelings and I've buried my anger somewhere (?) and that's why I have [insert illness].

Non-Cognitivist View: The argument against repressed emotions under the second theory follows the same logic as above. If emotions just are feelings (and don't involve any appraisals/beliefs/conceptual content) then you have to explain how it makes sense to have a feeling that you don't feel. That's no emotion at all because there's no feeling, and the feeling is what makes the emotion what it is. Let me repeat that. If what makes an emotion an emotion is that it is a feeling, then it makes no sense to say you have an unfelt feeling. No feeling = no emotion.

On the non-cognitivist theory, the notion of repressed emotion makes even less sense. 
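For the formally inclined, the dilemma on both views can be sketched as a toy model (a quick Python sketch; the fields and predicates are invented for illustration, not a serious formalization of either theory):

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of the two theories of emotion discussed above.
# All names here are made up for illustration.

@dataclass
class MentalState:
    judgment: Optional[str]   # e.g., "x treated me unfairly" (None if absent)
    felt_sensation: bool      # is the feeling consciously experienced?

def is_emotion_cognitivist(s: MentalState) -> bool:
    # Cognitivist view: emotion = judgment/appraisal AND a felt sensation.
    return s.judgment is not None and s.felt_sensation

def is_emotion_noncognitivist(s: MentalState) -> bool:
    # Non-cognitivist view: an emotion just is a (felt) feeling.
    return s.felt_sensation

# A "repressed emotion": by hypothesis, nothing is consciously felt.
repressed = MentalState(judgment="I was treated unfairly as a kid",
                        felt_sensation=False)

# On neither theory does the repressed state qualify as an emotion:
assert not is_emotion_cognitivist(repressed)    # at best a repressed *belief*
assert not is_emotion_noncognitivist(repressed) # an unfelt feeling is no feeling
```

Subtract the felt sensation and neither predicate comes out true: what's left over is, at most, a repressed belief.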

It looks like if the notion of repressed emotions is going to be intelligible at all, we'll need a more sophisticated account. This means leaving the realm of the self-help industrial complex, going back to Freud himself and examining what he said about the concept. 

In Part 2 I'll look at Freud's own account of the concept. 

Aside on Philosophy
Often non-philosophers will criticize (and mischaracterize) philosophy as being all about asking people "well, what do you mean by x?". In some contexts, this philosophical practice can be extremely annoying and not always useful. Allegedly, incessantly engaging in this practice is partly how Socrates got himself killed.

But, as I hope you can see from above, there are cases where asking this sort of question is useful. Someone postulates a theoretical entity (e.g., repressed emotions) and a philosopher will naturally ask, "well, what do you mean by that?" Spending some time getting clear on our concepts helps to avoid postulating unnecessary and incoherent theoretical entities.

That said, the more sophisticated Freudian account of repressed emotions won't be so easily dismissed. I don't know yet where my analysis will take me. Anyhow, there is value in carefully analyzing the contents of a concept and placing it within the context of major theories to see how well it fares.

Another thing philosophers make much ado about is whether a concept is "doing any work." If you can give an account of a phenomenon with existing concepts and without appeal to a new one, there has to be good reason to add the new theoretical entity. In Part 2, I'll pay more attention to the work Freud intended 'repressed emotions' to do in psychology and psychoanalysis. If there's no other way to capture the phenomena he refers to and do the theoretical work, then it might mean that the theories of emotions themselves have to change to accommodate repressed emotions.

Until then, keep repressing your emotions.  They can't hurt you cuz they don't exist. 

Saturday, October 10, 2015

Gun Control Policy in Public Spaces

Can you write something about guns in the US of A without pissing off a large portion of your audience? Not likely. But let me try to propose a policy that has elements that both sides might endorse (excluding the most extreme ends of the spectrum).

A little auto-biography
As a Canadian born to pacifist parents, the American fascination with guns has always been mysterious to me. Most of the world probably shares this sentiment regarding Americans and their gun fetish. Why would you want to own something that can--with the simple tug of your finger-- kill another human being? Clearly there is something psychologically wrong with such a person. 

Anyhow, after living in the US of A since 2007, I've come to see that it's not just crazy True Patriots that like guns, it's a lot of "normal" Americans too. In fact, many of the people I've become friends with (and with whom I share political values in every other domain) love them some guns. This softened my "judgy-ness". Some people just like to shoot things for fun or go hunting. "Piew! Piew!" It's not for me but I no longer see much harm in it. The demographic my friends represent certainly doesn't own guns with the delusion that they'll someday get to be a vigilante super-hero.

Let's get to my policy suggestions...

Deez Nuts
The Liberal Media (controlled by Jews who, according to esteemed historian Ben Carson, didn't fight back) loves nothing more than to run stories involving True Patriots accidentally shooting themselves or their friends. While the specific numbers vary between studies, the trend is the same: for every time a gun is legitimately (and successfully) used in self-defense, many more people are killed accidentally (100-400% more, depending on the study you look at and its inclusion criteria). If you include accidental non-fatal shootings, the numbers are much higher.

A significant portion of the anti gun-control lobby sincerely believes that if they were in a public-shooter-type situation they would be able to save the day or at least have a positive impact. This is pure fantasy. While I don't doubt that a combat marine or special forces operative could "take down" a shooter with minimal "collateral damage", it is only because they have years of combat training.

I don't care how good a shot you are in a shooting range (or on your Xbox). There is a massive difference between trying to shoot a stationary target vs a moving (suicidal) target that is trying to kill you. Without training you have no idea how you're going to respond. You may simply freeze up. You may panic and shoot everywhere. You will likely be trembling from either fear or adrenaline undermining your capacity to aim. In a crowded area, failure to have perfect aim may result in further deaths--the exact thing you're trying to prevent.

Trained professionals (military, police, various security agencies) go through thousands of hours of training to condition themselves to deal with high stress situations. I don't care how badass you are in your fantasies: if you aren't trained for using your weapon when someone is trying to kill you, you will not perform the way you think you will. You are either going to have no effect on the situation or you will end up shooting innocent people in your panic. Merely owning a gun does not a brave or competent person make. 

The typical argument you hear from True Patriots is that if everyone is armed in a public place, suicidal shooters won't be able to do the damage they do. This is only possibly true if the people carrying weapons are trained to use them in high stakes situations. It is not true if they have never been in any situation remotely resembling a live fire fight. 

So here's the proposal: Anyone who wants to carry a gun in a school or similar public place can do so provided they have the relevant training. This would include at minimum (a) a one-time course that provides training equivalent to what a SWAT team member, marine, or comparable operative receives for live-firefight situations; and (b) a mandatory annual refresher course.

Regarding (a), the big political issue will be: who decides how much training is enough? Easy. Ask the marines or some other respected military/police institution to outline what they consider to be the minimal training required to send someone into a live firefight. After all, these institutions know better than anyone else what's required. That is to say, we let the experts rather than politics decide. (Crazy. I know.)

Regarding (b), the answer is the same. Knowing how to react effectively in a live-fire situation is a perishable skill and so there must be retraining if we want gun-carriers to be more likely to help than harm a situation. After all, that's the goal, right?

Bb..bbut...it's my right. First of all, rights are not unrestricted. They are commensurate with capacities to exercise them. Second, no one is saying you can't go to shooting ranges, keep a gun in your house, or go hunting. This is about carrying a gun in a crowded public place. 

Although there may exist other arguments, the primary argument given for why firearms should be permitted (or even encouraged) in public spaces is that True Patriots with guns will be able to diminish casualty and fatality rates. This argument rests on a false assumption: that mere possession of a gun is sufficient to, on average, accomplish this outcome. For the argument to have any chance at soundness, a premise needs to be qualified. It isn't merely armed True Patriots that can reduce casualty and mortality rates in mass shootings but True Patriots with sufficient training. Otherwise, True Patriots will, on average, probably be a net liability or have no effect at all.

Loose Ends
1. Gun control is not the same thing as the government coming to take your guns. Why do we always hear this straw man? 

2. As far back as the mid-'90s, the NRA sponsored a law in Congress (which was passed and repassed) that prevents the CDC from collecting and using any data on gun violence. The law reads that "[any data collected by the CDC] may not be used to advocate or promote gun control." WTF? If unrestricted gun ownership and lax gun control laws (and enforcement) really weren't related to higher gun death rates, as the NRA claims, then why does the NRA lobby against collecting the data? What are they so afraid of? If they're right, why not use the data to show it? And more importantly, how can we have good policy without good data? To me the reasons are fairly obvious, but I'll let you draw your own conclusions.

3. Gun control vs gun rights arguments usually go like this: 

    (a) War of empirical evidence: continued disagreement for a variety of reasons, including biased interpretations, evidence of differing quality being given equal credence, and methodological disputes.

    (b) Normative: "Fine, I don't accept your studies and you don't accept mine. It doesn't matter anyway. Gun ownership is a right." So, here's the thing I've always wondered...and if you are an anti-gun-control proponent, maybe you can let me know in the comments what you think. Suppose the empirical data indicated fairly strongly that lax gun control laws (and enforcement) actually do lead to significantly higher rates of homicides and accidental deaths, ceteris paribus. Would you be willing to oppose gun control legislation despite the fact that more people will die and get injured? At how many preventable deaths per year might you change your mind? 100? 200? 1,000? Is there any number? Or is an unrestricted right to own a gun more important than any number of preventable deaths?  

Wednesday, September 2, 2015

Doing What Feels Right and Sartre's Existentialism

Existentialism can be summarized in one phrase: "existence precedes essence." But what does this mean? Before existentialism, philosophical systems presumed that Man had an essence. To understand what that means, consider a chair. Chairs don't just randomly pop into existence...no sir!  Before a chair is created, someone or something has to have a concept of what a chair is (i.e., something for sitting); then the chair is created in conformity with that concept. The essence of a chair is that for which it is designed--which is sitting, if you didn't know.

Notice two things: (a) having an essence, in the sense I've described, implies a creator and (b) that the creation's essence is determined before it comes into existence. And so, artifacts--things like chairs, computers, cars, iphones, etc--all have creators and have an essence/nature/purpose before they come into existence.

Let's return to "existence precedes essence." For existentialists, human beings are unlike artifacts in that we exist before we have an essence/nature. We are not designed, and so there is no predetermined essence that defines who or what we are. Who/what we are comes after we exist. We are thrust into the world and create ourselves through the actions we choose. This is what "existence precedes essence" means: first we exist, then we acquire a nature (through our acts).  If I do harmful acts and act selfishly, then this is what I am. If I create and share, then this becomes my nature. There is no predetermined nature beyond what I actually choose to do. Again, contrast this with artifacts: first they have a nature/essence/purpose, then they are brought into the world.

For existentialists, the human condition is (a) understanding that we are free to choose our own essence (through our actions) and (b) figuring out how we ought to create ourselves given we have no intrinsic nature. Another way of thinking about this is to say that we are responsible for who we are.

Radical Freedom and Responsibility
Although we have no common nature among us, we all share the same conditions: We are "condemned" to be free, thus we have to make choices about how to live and 'be'. There is nothing objective in the world to grab onto to guide us in our choices. Every choice is exactly that--our choice. And because every action is a consequence of our own choice, we bear absolute responsibility for it. Pretending you don't have a choice is what Sartre calls "bad faith". Bad faith is lying to yourself about the reality of your radical freedom. Even when you act in bad faith, you cannot escape responsibility for the choices that define who you are. It was your choice to lie to yourself.

The existential condition is that there are no objective values to guide our decisions and no one but ourselves to decide which values we adopt. Now some might say that we can turn to religion or authorities to guide us. There are a few problems with this. First, you are shirking your responsibility as a free being to make your own decisions. By deferring to an outside source for your decisions you are denying your responsibility to choose for yourself.

One might say that choosing to follow this or that text or leader is a choice--and it is. But this doesn't allow you to escape the subjectivity of the human condition:  how you choose to interpret the various texts and advice will also be a matter of your own choice. There's no external source you can lean on to tell you how to interpret. You cannot escape the subjectivity of your existence and so you still must bear the responsibility of choosing one interpretation over another.

Furthermore, your essence is nothing more than the sum of the things that you choose to do. And if trying to deny your radical freedom and offload responsibility are part of your actions, then you are a coward. Anguish, for the existentialist, comes in part from knowing that he bears full responsibility for what he does and who he is and that this responsibility is inescapable.

Moral Choice
If there are no objective values in the world, it seems like anything goes. Again, just like with any other choice, the ethics that you choose define your essence. If you choose and act on a selfish ethic, then that's what you are and you are responsible for everything that comes from it.

But, in a sense, our actions aren't completely unconstrained. Here's where we exit subjectivity. Every choice that I make not only defines who I am but also defines the essence of Man as a whole because I am a part of that whole. This demands that I consider the consequences of my choices on what will be the nature of Man.

For every man, everything happens as if all mankind had its eyes fixed on him and were guiding itself by what he does. And every man ought to say to himself, "Am I really the kind of man who has the right to act in such a way that humanity might guide itself by my actions?" (Existentialism)

I am responsible, through my choices, for how Man is defined for my time in history because I am a part of Man.

But this is vague. How do I decide what to do in specific cases? Unfortunately, general moral principles can't tell us how to decide particular cases. Sartre gives the following case:
[The young man's] father was on bad terms with his mother, and, moreover, was inclined to be a collaborationist; his older brother had been killed in the German offensive of 1940, and the young man, with somewhat immature but generous feelings, wanted to avenge him. His mother lived alone with him, very much upset by the half-treason of her husband and the death of her older son; the boy was her only consolation. The boy was faced with the choice of leaving for England and joining the Free French Forces--that is, leaving his mother behind or remaining with his mother and helping her to carry on.
He was fully aware that the woman lived only for him and that his going off--and perhaps his death--would plunge her into despair. He was also aware that every act that he did for his mother's sake was a sure thing, in the sense that it was helping her to carry on, whereas every effort he made toward going off and fighting was an uncertain move which might run aground and prove completely useless; for example, on his way to England he might, while passing through Spain, be detained indefinitely in a Spanish camp; he might reach England or Algiers and be stuck in an office at a desk job. As a result, he was faced with two very different kinds of action: one, concrete, immediate, but concerning only one individual; the other concerned an incomparably vaster group, a national collectivity, but for that very reason was dubious, and might be interrupted en route. And, at the same time, he was wavering between two kinds of ethics.
On the one hand, an ethics of sympathy, of personal devotion; on the other, a broader ethics, but one whose efficacy was more dubious. He had to choose between the two. Who could help him choose? Christian doctrine? No. Christian doctrine says, "Be charitable, love your neighbor, take the more rugged path, etc., etc." But which is the more rugged path? Whom should he love as a brother? The fighting man or his mother? Which does the greater good, the vague act of fighting in a group, or the concrete one of helping a particular human being to go on living? Who can decide a priori? Nobody. No book of ethics can tell him. The Kantian ethics says, "Never treat any person as a means, but as an end." Very well, if I stay with my mother, I'll treat her as an end and not as a means; but by virtue of this very fact, I'm running the risk of treating the people around me who are fighting, as means; and, conversely, if I go to join those who are fighting, I'll be treating them as an end, and, by doing that, I run the risk of treating my mother as a means. (Existentialism)
And so a general ethical principle can't help us decide specific cases. And besides, even if we do choose a general ethical principle, there is no guide to tell us which to choose as either a general guide of action or for particular cases. We also have to make that choice and we bear responsibility for it and accept that it now defines us in so far as we act on it.

Sartre and Emotions
General ethical principles can't tell me what to do in particular cases. Maybe I ought to do what feels right to me. If the young man feels that his love for his mother is great enough to sacrifice his other desires, then he should do that. But if the feeling of love for his mother isn't enough to make him give up everything else, then he ought to leave.

But how will he know that he loves his mother enough to give up everything else unless he actually does it? To know that his feeling leads him to the right choice he has to live that choice. He has to see how it plays out. He might try it and find out that the desire to avenge his brother overwhelms him and he's unhappy staying behind and so the feeling misguided him. But he can't know this before he lives it. The feeling can't tell him in advance what to do.

Consider another case that many can relate to. How do you know if marrying someone will be the right choice? Consulting your feelings before you're married can't tell you. You'll only know if it's the right choice if you actually do it. If it turns out well, it was the right choice. If it turns out poorly, it wasn't. The feeling can't tell you what's going to happen and so can't tell you in advance what the right thing to do is. Besides, if feelings were a good guide to marriage, we'd expect the divorce rate to be substantially lower.

It Isn't All Doom and Gloom
So, we are hurled into the world, condemned to be free with no fixed points to guide us in how we ought to live, yet we are somehow responsible for everything we do and are. This is the existential forlornness and anguish.  Forlorn because we cannot turn to anyone to make decisions for us and anguish because of the tremendous responsibility that comes with creating both our own essence and that of Man.

But chin up, buttercup! The good news is that existentialism is a philosophy of action. You might not have any guides, but you get to create yourself--with every choice. Bonus: if you don't like where things are going, you can reinvent yourself at any moment, if you so choose.
There is no reality except in action. Man is nothing else but his plan; he exists only to the extent that he fulfills himself; he is therefore nothing else than the ensemble of his acts, nothing else than his plan. (Existentialism)
Before you go skipping off into the sunset, existentialism is a demanding philosophy if you take it seriously. In creating yourself and defining Man, with every act you are saying "this is what I think all mankind ought to be--follow me!" When taken seriously, that is a massive responsibility to bear (hence, all the angst).

It also means that there is nothing that exists that is not expressed in action. There are no great authors with unwritten great books, no charitable people with kind deeds undone, no great loves who have not loved.  "Reality alone counts and [...] dreams, expectations, and hopes warrant no more than to define a man as a disappointed dream, miscarried hopes, as vain expectations" (Existentialism).  In other words, coulda woulda shoulda is worthless. At the end of the day, what matters is what you did with your life and whether it was an example for others to follow.

Thursday, August 6, 2015

Cilantro, Moral Truth, and Justification

I've been working on this paper for longer than I care to admit, but I have to turn it in at some point. I've written about four or five different versions of it, each with a different solution or non-solution to the puzzle I present. Anyhow, a few notes:

(A) For some reason the footnotes didn't post to the blog so some of my clarificatory points aren't here. Here are two of the important ones. 
     (1) The anti-realist position I'm concerned with is error theory (there are no moral facts and moral propositions have cognitive content). 
     (2) In the last section I talk a lot about "evidence". What counts as evidence in moral arguments would require its own paper so I make some assumptions about what counts: moral judgments, intuitions, principles, and emotions. I'm happy to include other candidates. 

(B) For non-philosophers all you really need to know to understand this paper is what moral anti-realism is. In the simplest terms it's the view that there are no moral facts. Everything is, like, just your opinion, maaaaaaaan!

Cilantro, Moral Truth, and Justification
Appetizers: Anti-realism About Gustatory Facts 
At dinner tables around the world, there is perhaps no issue more divisive than whether cilantro is or is not a delicious garnish. It is the gustatory domain’s own abortion debate. There’s little middle ground and each side views the other as irredeemably wrong. In more reflective moments, most would agree there are no objective facts about the deliciousness of particular foods.  Abe can claim that cilantro is objectively delicious while Bob claims that cilantro is objectively disgusting but the fact of the matter is that there is no fact of the matter! Granting this assumption, is there any way that we could make sense of the idea that either Abe or Bob’s belief is better justified than the other? 

For the moment, I’m going to assume that there isn’t. It seems as though Abe and Bob could offer justifications for why cilantro is subjectively delicious or disgusting but I doubt any of these reasons would convince a third party of cilantro’s objective gustatory properties. Abe and Bob could insist that their arguments and reasons support objective gustatory facts but we’d dismiss their claims as category mistakes—they’re confusing their personal preferences for objective facts about the world. Any argument they give for objective gustatory facts about the world is better interpreted as facts about their subjective gustatory preferences being projected onto the world.

Now consider an analogous moral case and substitute your favorite moral proposition and its opposite for the gustatory ones. For example, Abe claims that it is an objective fact that slavery is a morally good institution while Bob claims the opposite—i.e., that it is an objective fact that slavery is a morally bad institution. If, in the cilantro case, anti-realism about objective gustatory facts leads us to accept that neither competing belief is better justified than the other, then it seems that consistency requires that anti-realism about moral facts lead us also to conclude that neither Abe’s nor Bob’s belief regarding slavery is more justified than the other.  Just as there are no objective facts about the deliciousness of cilantro, there are no objective facts about the moral badness or goodness of slavery, and so one position cannot be more justified than the other. Any argument is merely a projection of the interlocutor’s personal preferences or explainable by appeal to facts about their psychology.

There may be some extreme anti-realists out there that are willing to bite the bullet and concede the point.  However, I’m willing to bet that many anti-realists would deny that all moral beliefs are equally well-justified even if moral beliefs can't be objectively true or false. If I'm right then these anti-realists need an account of justification that doesn't depend on the notion of truth. Is this possible?

The framework for this paper is to examine the relationship between moral anti-realism and justification. Suppose we accept that MORAL ANTI-REALISM IS TRUE: There are no objective moral facts. On what basis can we then evaluate competing moral claims? Is justifying objective moral claims analogous to trying to justify objective gustatory claims? That is, since there really are no facts of the matter, is one claim just as well-justified (or unjustified) as the other? The puzzle for the anti-realist is to reconcile the following two assertions: MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.  I will argue that, if we adopt an externalist theory of justification, a Peircian fallibilism offers a potential solution to the puzzle. Before proposing my solution, I will consider and evaluate other attempts to reconcile the two assertions.

A Quick Word on Theories of Justification
Before proceeding we’re going to need to take a brief look at theories of justification and pare down the scope of my inquiry.

Two Theories of Justification
One way to analyze the concept of justification is along internalist/externalist lines. Internalists argue that a belief is justified so long as the believer is able to provide some sort of argument or supporting evidence when challenged. Externalists argue that a belief is justified if it was generated by a reliable belief-forming process, where "reliable" means that the process generates more true than false beliefs in the long run. For example, beliefs formed by visual perception are justified on the externalist view because visual perception generates more true beliefs than false beliefs in the long run.  So, my belief that there is a computer in front of me is justified because it was formed by my seeing it—which is a reliable process. It’s more likely that I’m actually seeing a computer than I am hallucinating it. 

I wish to side-step taking a definitive position on the internalist/externalist debate and suggest that both types of justification are plausible in ethics. We think that a moral belief is justified via normative reasons (internalist), but we also think that particular ways of arriving at moral beliefs confer justification. For example, we think that moral judgments arrived at through careful reasoning and/or reflection are more justified than those produced by unreflective emotional knee-jerk reactions. And so it's plausible to think the reliability of the process that produces a moral belief is at least partially relevant to the belief's (relative) justification. For the remainder of this paper, I will grant myself that assumption and constrain the scope of justification to externalist justification. A full treatment of an internalist model in regard to my inquiry requires a paper unto itself—although I suspect internalist theories may have similar problems.

Round 1: Externalist Justification Isn't Possible if There Are No Moral Facts
The simple argument against the possibility of an anti-realist account of externalist justification goes something like this. 

P1. Reliability is cashed out in terms of whether a process produces more true than false beliefs in the long run.
P2. Anti-realists deny that moral propositions can be true or false.
C1. So, there's no way to evaluate the reliability of a process when it comes to moral beliefs because the very attributes that we require to measure reliability aren't available.
C2. So, on the anti-realist model all moral beliefs are equally justified (or unjustified).

In short, anti-realists deny the very attribute (truth) required to measure reliability. If we can't know which processes are more reliable than others there is no externalist ground to say one moral belief is better justified than another.  But, again, surely some moral beliefs are better justified than others…but how? 

Consider the inference rule modus ponens. We know modus ponens to be a reliable belief-forming process from using it in other domains. It is content neutral. Its reliability credentials have already been checked out, so all we need to do is import it (and similar content-neutral processes) into the domain of ethics. Anti-realists can say that moral conclusions arrived at through modus ponens (or other combinations of formal rules of inference) are more justified than those that aren’t. 
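For readers who want the rule spelled out, here is the textbook schema (standard notation, nothing specific to ethics): from a conditional and its antecedent, infer the consequent.

```latex
% Modus ponens: if both premises are true, the conclusion must be true
\[
\frac{P \rightarrow Q \qquad P}{\therefore\; Q}
\]
```

The crucial feature is that the rule is truth-preserving only conditionally: it guarantees a true conclusion given true premises, which is exactly the guarantee the anti-realist cannot invoke.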

Modus ponens and other valid argument structures are contingently reliable processes. That is, if the inputs (i.e., premises) are true then so too will be the outputs. The problem is that the anti-realist has denied the possibility of true inputs in the moral case. If the inputs can be neither true nor false, then the conclusions are also neither true nor false. And worse yet, the same argument structure can yield apparently contradictory outputs.

Consider the following examples:
Ethics 1
1E. If you abort a fetus it is wrong. (Neither true nor false).
2E. I aborted a fetus. (Neither true nor false).
3E. Therefore, what I did was wrong. (Neither true nor false)

Ethics 2
1E*. If you abort a fetus it isn't wrong. (Neither true nor false).
2E*. I aborted a fetus. (Neither true nor false).
3E*. What I did wasn't wrong. (Neither true nor false).

It appears as though importing valid argument structures into ethics doesn’t give a solution to the puzzle of reconciling MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.

Round 2: Content-Generating Processes
Perhaps the problem here is the content neutrality of the above processes. We need processes that justify initial moral premises as well as yield conclusions. We have some familiar plausible candidate processes that might confer justification: reflective equilibrium, rational reflection, rational discourse, coherence with existing beliefs, idealizing what a purely rational agent would want, applying the principle of impartiality, universalization, to name a few.

Notice also that we think that some cognitive processes for ethical judgment don’t confer justification—i.e., are unreliable. For example, if I form a moral judgment when I'm extremely angry I might come to reject that belief once I calm down and employ one of the above methods instead. So, a belief arrived at as a consequence of a temporary acute emotional reaction is not well-justified.  

Moral psychology and social psychology are littered with experiments in which either the subject or the environment is manipulated to produce judgments that the subject would reject upon learning of the manipulation. This seems to hint at an answer: Beliefs formed by processes that involve manipulation of the environment and/or the subject’s mood are less reliable/less justified than beliefs formed by processes that don't involve any obvious manipulations. 

The Same Problem?
This account implies that some processes are more likely to "get it right" than others. But what is it to “get something right” if there's no target for epistemic rightness (i.e., truth)? This seems to be the same problem from Section 1 all over again. If moral beliefs can be neither true nor false, in what way can we say that one process yields more true outputs than another? Sure, we might systematically reject judgments produced by some processes in favor of others, but why prefer the outputs of one process over another if we can’t say that the judgments from one are more likely to be true? The grounds for thinking judgments from one process are better justified than those from another seem to be that those judgments are more likely to be true.

Round 3: Analogy with Scientific Justification and Fallibilism as a Possible Solution
Harman argues that there is an important disanalogy between explaining ethical judgments and explaining scientific judgments. We can’t explain why a physicist believes there is a proton without also postulating that there is something “out there” in the world causing the physicist’s observation, which is in turn interpreted as a proton. A moral judgment, on the other hand, can be explained merely by appeal to a subject’s psychology. We needn’t postulate any thing or moral properties “out there” that cause a subject to have a moral belief that x. 

Let’s accept Harman’s argument. Despite the fact that the causes of scientific and moral judgments might differ, there may be ways in which justification functions similarly in both.

Scientists habitually couch their arguments and conclusions in fallibilist language. Claims and conclusions are presented as provisional, based on the currently available evidence and methods. The history of scientific discovery is one of revised and overturned conclusions in light of new evidence and of recursive, self-correcting improvements in the scientific method itself.  

From the point of view of internalism vs externalism about justification, we might consider new data as a kind of internalist justification for a claim because they are reasons to believe one thing rather than another. Research methods, on the other hand, can be viewed as instances of the externalist’s processes that confer justification. The idea that some processes are more reliable than others is a familiar idea in scientific research. Claims that derive from methods (i.e., processes) that avoid possible known biases are more reliable and hence more justified than methods that don’t.

For example, the placebo effect is a well-known occurrence in medical science. If patients think they are receiving treatment—even if they aren’t—patients report subjective measures (e.g., reduced pain/discomfort) significantly more positively than non-treatment (i.e., control) groups. We also know that if the researcher knows which patients are in the treatment group and which aren’t, this can influence both the way the researcher asks the patient questions and how they interpret data (they’ll bias toward a positive interpretation). For these reasons we think that the results from medical research that are placebo-controlled and double-blinded are more reliable than those that aren’t. 

In short, data from a study that employs a more reliable process (e.g., double-blind, placebo-controlled) are more justified than data from a study that didn’t do either of these things. The more a process avoids known errors, the more justified its conclusions—despite the fact that the blinded study’s conclusions might also eventually be overturned. There is always the background understanding that new and better methods might come about and generate incommensurate data and/or conclusions, but this doesn’t undermine the relative justification that the current methods confer on the output beliefs. 

Analogously, moral and social psychology have produced a vast literature showing the many ways our moral thinking can go awry.  We know that a cluttered desk can cause us to pass harsher penalties than we would otherwise, that a temporarily heightened emotional state greatly influences our judgments, that the feeling of empathy can lead us astray, and that implicit biases can play important roles in our judgments—to name a few. In short, there are many ways in which both our basic hard-wiring and various forms of personal and environmental manipulation can cause us to make judgments that we, upon learning of the manipulation or the bias, would likely reject in favor of a judgment arising from unmanipulated deliberation or from one of the familiar gold-standard methods of moral reasoning.

Perhaps an anti-realist can think of the activity of moral thinking not as one that aims at discovering some objective truth but rather as one that seeks to avoid known cognitive errors and to insulate against manipulation. Insofar as our judgments derive from methods that avoid (known) cognitive errors and biases, our moral claims are better justified. This, however, doesn’t entirely solve the puzzle. We still need to answer why we should choose the output of one type of process over another.  It’s easy to say that the manipulated judgment is “wrong” or “mistaken”, but how do we say this without appeal to truth? “Error” implies a “right” answer. One might just as easily say that the correct judgment is the manipulated or biased one and that we are systematically mistaken to adopt the reflective judgment made in a cool dark room. 

The Main Challenge to the Anti-Realist: The moral anti-realist needs some general criterion to explain why we (ought to) systematically endorse judgments from one process rather than another. That we do isn’t enough. We must give an account of why one process confers more justification than another.

Peirce, Methods of Inquiry, and The “Fixing of Opinion”
Before suggesting a possible criterion to answer the challenge, I want to sketch out a Peircian analysis of methods of inquiry, since it inspired my suggestion. I’ll fill in other details later as they become relevant. Peirce argues that the “fixing of opinion” or “the settlement of opinion” is the sole object of inquiry (Peirce, pp. 2-3).  While we needn’t commit to Peirce’s exclusivity claim regarding the purpose of inquiry, Peirce provides useful insight into why we might think some processes confer greater justification than others. I take him to be proposing two related desiderata for our methods of inquiry: that they produce beliefs that are (a) relatively stable over time and (b) relatively impervious to challenge.  The anti-realist can distinguish between processes’ justificatory status according to the degree to which they achieve (a) and (b).

This possible anti-realist response to the challenge of justification is not without difficulties. I will explore two related ones. First, it isn’t clear that stability of beliefs is a condition for justification; as standard examples of recalcitrant and dogmatic Nazis or racists show, we in fact have reason to believe stability might have little to do with justification. Second, I will need to show that stability marks something we think matters to justification: namely, the absence of cognitive errors and the inclusion of all relevant evidence. This is the fallibilist aspect of the proposal: stability needn’t be a proxy for truth-tracking; however, it is a reasonable proxy for believing that we are avoiding errors. This is part of the answer but not all of it. 

As Peirce notes, stability can be achieved in various ways—not all of which we’d think confer justification. The second part of this fallibilist model of justification has to do with the degree to which a process excludes relevant evidence, thereby making its outputs susceptible to challenge. Thus, for Peirce, long-run stability requires that the method of inquiry take into account all relevant sources of evidence. A method that excludes forms of evidence will produce beliefs that are more likely to eventually be overturned, or it will require that people be unable to make inferences from one case to another or from one domain to another.  Peirce compares four methods of inquiry, each of which aims to produce stable beliefs. In so doing he illustrates how this second criterion (i.e., imperviousness to challenge) works with the first (i.e., stability) to produce a theory of justification.

To achieve stability of belief in the method of tenacity, one dogmatically adheres to his chosen doctrines by rejecting all contrary evidence and actively avoiding contradictory opinions. Coming into contact with others, however, necessarily exposes him to different and conflicting views. This method can't accommodate ever-present new data without giving up on rationality and the ability to make inferences. Unless we live like hermits, the "social impulse" against this practice makes it unlikely to succeed. The method of authority is the method of tenacity practiced at the level of the state, with the addition of social control. This method fails to achieve long-term stability because it cannot control everyone's thoughts on every topic. This isn't a problem if people are unable to make inferences from one domain of inquiry into another, but to the extent that they are able, stability will be undone and doubt will emerge.

The above two methods share an important feature with regard to how they achieve stability. In both cases, when I need to decide between two competing beliefs, the method tells me I ought to pick the one that coheres with what I already believe. In other words, there will be cases where I will have to "quit reason" by rejecting contrary evidence and inferences. Both methods systematically exclude relevant evidence in generating beliefs.

Now, compare these methods with something like wide reflective equilibrium or rational discourse. With these methods, how do we determine what to believe? Rather than exclusively referring back to what we or the state/religion already endorse, when these methods confront new evidence they adjust their output beliefs accordingly. In the long run, the supposition is that (for example) reflective equilibrium and rational discourse will lead to more stable beliefs than the above two methods, because their outputs incorporate the best available total evidence rather than rejecting it.

Reply to the First Challenge
Let's restate the first challenge: stability on its own doesn't seem to confer justification. There are many methods of inquiry by which we might achieve stable beliefs, not all of which confer justification. When we defend stability as a justificatory property, the concern is that we're begging the question: when a process generates stable beliefs that we approve of, we think stability is good; when it generates beliefs we disagree with, we think stability is bad.

To reply to the challenge, let's consider, for example, reflective equilibrium. With (wide) reflective equilibrium we suppose that by finding an equilibrium between everyone's principles and considered judgments, in the long run we arrive at a view that no one could reasonably reject (because it also takes their views into account). Stability, on this method, arises as a consequence of taking everyone's principles and considered judgments into account; i.e., it doesn't obviously exclude any evidence that might, in the long run, diminish the stability of the beliefs the method produces. And so the critic of stability has a point: stability on its own might not confer justification. What matters is why the view is stable. The assumption is that a moral view derived from processes that take competing viewpoints into account is stable for the right reasons: it is less susceptible to challenge. And it is less susceptible to challenge because those processes don't exclude evidence (widely construed).

Reply to the Second Challenge
The second challenge is to give positive reasons for thinking long-term stability confers justification. When we make knee-jerk or manipulated judgments, we end up with beliefs inconsistent with our other beliefs, and so we have reason to conclude we've made an error somewhere: in endorsing an inference, a judgment, or a general principle. Conversely, with a process like reflective equilibrium or rational discourse, we eliminate inconsistencies in the long run. By proxy, we're also eliminating errors in our inferences, judgments, or general principles; i.e., the things that undermine justification. In the long run, the supposition is that some methods of inquiry (e.g., reflective equilibrium) will lead to fewer and fewer errors, in turn contributing to more and more stable beliefs.

This also helps answer a problem I raised in the first section of the paper. A reliabilist account of justification has truth built in, and so doesn't help an anti-realist explain how following a valid argument structure can confer justification: if the inputs don't have a truth value, then neither do the outputs, and so one output is just as justified as another. A fallibilist approach provides a solution. Failure to follow a valid argument scheme undermines justification because it indicates an error in reasoning. And so deliberative belief-forming processes that don't follow valid schemes are making errors, and in that respect their outcomes are less justified than those of processes that do.

The elimination of errors is part of the answer to why we think beliefs derived from one process are better justified than beliefs derived from another. Long-term stability of output beliefs is partly a proxy for the absence of errors, and so, to the degree that a process generates stable beliefs, we have reason to think those beliefs are less likely to contain or be the product of errors.

The second part of the answer is similar to the reply to the first challenge: processes that attempt to exclude classes of evidence or deny certain inferences aren't going to be stable. There's an analogy with science: a research method that regularly has its conclusions overturned because it fails to take into account certain sources of evidence is a method that generates beliefs that are less stable in the long run. The beliefs are less stable because important classes of evidence aren't taken into account or controlled for (for example, various cognitive biases). Similarly, a moral reasoning process that fails to take into account certain sources of evidence (e.g., competing arguments, the fact that we are prone to certain cognitive errors) will also generate beliefs that are less stable, and by extension, less justified.

The puzzle for the anti-realist is to reconcile a commitment to there being no moral facts with the view that some moral beliefs are more justified than others. If we take an externalist account of justification, a Peircian fallibilism offers a possible solution to the puzzle. Why should we think the outputs of one process are more justified than those of another if the outputs can't be true? Because some processes generate beliefs that are relatively more stable and impervious to challenge in the long run than other processes do. Stability, on this model, occurs as a consequence of taking into account all the relevant data and avoiding cognitive errors in generating output beliefs. By doing so, outputs are less likely to be overturned by excluded evidence. Stability is also a proxy for the absence of error: if a process produces beliefs that are systematically overridden, it must be because the outputs are inconsistent with other judgments, beliefs, or inferences. Processes that systematically generate inconsistencies indicate errors, which in turn also undermine stability.

I'd like to close with the following thought experiment. Suppose both realists and anti-realists agree on which processes confer greater relative justification than others. Would the realism/anti-realism debate matter much? Aren't comparatively well-justified beliefs (and actions) what we're really after in ethics?