Thursday, August 21, 2014

The Impoverished State of the American Campfire and How to Build A Fire Properly

When I was in Japan what drove me (and most other Western foreigners) loco was the Japanese obsession with procedure and protocol. There's a correct way to do everything in Japan and if you don't do it that way, the outcome is often next to worthless--or at the very least the cause of furrowed brows.

Although true of everyday life, the preference for procedure over product is most pronounced in those things that are most definitive of Japanese culture. Take something as simple as making tea. There's a several-hour ritual just for making a freakin' cup of tea for God's sake! There's even a right and wrong direction to stir the tea.  Another example would be in judo: In Japanese judo (unlike the judo of many other countries) there is a heavy emphasis on the aesthetics of the throw. It's not enough to just throw your opponent for the ippon (full point). The throw also has to be pretty. How you perform the throw is just as important as throwing your opponent. Examples abound but I think you get the picture; and besides, at this point you must be wondering why I'm talking about the Japanese preoccupation with process when this post's title has to do with fire and America.

I'm talkin' 'bout J-pan in order to contrast it with its polar cultural opposite: 'Murica. To the extent that the Japanese emphasize procedure, Americans prize outcome. “We don't care two hoots how you do it, just get 'er done!” Only in America could duct tape be a solution to everything.

So, what's all this got to do with building a fire? I'm glad you axed. On my camping trip across the US of A, I noticed something that bothered me: The way Americans “build” a campfire.

The American camper, in his native habitat, puts his firewood into a pile, pours gasoline on it, then "drops a match on that bitch" (Wut! Wut!). Boom! Instant campfire: no fuss, no muss...and most importantly, no waiting. “I want a campfire, and I want it now! Get 'er done!” (high fives all around)

Call me a purist or perhaps a luddite (they might be one and the same), but I think important things are omitted when your method of starting a campfire is to simply douse some logs with gasoline. “But you said you wanted a fire, didn't you? So, I made one.”

Yeah, I get it. The outcome's the same but I maintain that something's amiss.

For a while I couldn't figure out why I was so bothered by this practice. I mean, why should I care? It's just a freakin' camp fire and not even mine, for that matter. Last night, the answers came to me.

For starters, beyond avoiding singed eyebrows, there's no skill or art to the American way of making a campfire. There's something to be said for the skill and patience it takes to build a good campfire “old-school”: from gathering and arranging the right sized twigs, to nursing the flame in the early stages, to knowing when to add larger pieces and when to just let the fire breathe. This is knowledge and skill that must be acquired through repeated experience and that is usually shared and passed on by an early mentor...which leads me to the next point.

There's a social component to building a fire the “slow” way. Usually, in a family, the young children are sent out to gather twigs and sticks as kindling while the older children/teenagers get to wield the ax to split wood. One or two lucky children get to be the ones to use the matches to light the base of the fire. The parents coach the children in arranging and lighting the twigs “just right” as a skill is passed from one generation to the next. As the children progressively get older they get the “privilege” of graduating to and learning new fire-building tasks. These moments of interaction are precious. The fire is a symbol of learning and shared labor and its warmth is enjoyed all the more because each member of the group contributed in some way.

Think about it. Besides language-use, is there any other skill that is more quintessentially human than building a fire? The American method of fire-building literally breaks the inter-generational line of this skill's transmission that is intrinsic to our human-ness. The proverbial torch is quite literally not passed on to the next generation.  It deprives subsequent generations of learning, and the current generation of passing on, a skill that was shared by virtually every single one of our ancestors. One more experience that ties one generation to the next is lost. The American fire-building practice disconnects and alienates us from some of the most important shared experiences of our collective past.

All these goods "go up in flames" when, in building a fire, there is no regard for protocol and all emphasis is placed on outcome.


Or maybe I'm making too much of all this... Besides, perhaps if some isolated aboriginal group saw how I start my fires with a match they might roll their eyes at me for foregoing all the social good that comes from frantically rubbing two sticks together to get the initial heat to light the wood shavings and dry grass...

Maybe we should all bring a canister of gasoline when we go camping.

But probably not.

Monday, July 21, 2014

Why Epistemology Matters: Reason Number 2

Introduction
A while back, in an attempt to assuage feelings of doubt, I wrote a post on why the main issues in epistemology matter to Joe and Joanne Shmo.  Here, I will address why what appears to be an insignificant, esoteric, and abstract issue in epistemology has extremely important consequences for our daily lives and especially our social institutions.

Before reading what I have written, click on the link and watch the video:


I said click it!

Ok, so if you didn't have a chance to watch the entire video here is the most relevant point:

72% of wrongful convictions that are later overturned by DNA evidence are a result of eyewitness testimony.

Overview

Let's get the terminology out of the way.  Epistemology is the study of knowledge.  Two of the major questions explored in epistemology are (a) what is knowledge? and (b) what is it possible to know? (i.e., what are the limits of knowledge?).  Apart from these central issues, there are other important issues, one of which I mentioned before: when is a belief justified?

It shouldn't come as a surprise to any thinking person that some beliefs are well-justified and others not so much--or not at all.  So, when is a belief justified and when isn't it?   Establishing some rules or principles for justification before we go out into the world will allow us to avoid falling prey to ideas and beliefs that might be appealing (i.e., confirm our biases) yet are not well-supported.  

Consider Jennifer Thompson's testimony.  Pretty convincing, right?  She has compelling support for her belief about who raped her.  If you think that the reasons she cites for her belief are strong justification, then you are subscribing to a theory of justification called internalism or "current time-slice" theory.

Don't be scurd by the fancy name.  All this means is that for a belief to be justified, the believer or proponent has to, in principle, be able to--at the moment of inquiry--produce some sort of compelling evidence or support for their belief. Otherwise stated, a justifying reason for the belief must be internally accessible to the believer.  In Jennifer's case, maybe it's a memory or a current observation.  Justification for moral beliefs might be an appeal to a commonly held moral principle; for a scientific belief it might be an observation or an inference.  Regardless, the main point of internalism is that the justification for a belief must be accessible to and producible by the believer.

An externalist on the other hand says that justification has nothing to do with conscious access to reasons and everything to do with the reliability of the process by which the belief was acquired. "Reliability" in this context means "the process by which the belief was acquired produces more true beliefs than false beliefs over the long run."  The externalist or "process reliabilist," as they are sometimes called, doesn't care two hoots about the justifying reasons to which Jennifer has conscious access.  They only care about whether the process (eye-witness testimony, in this case) by which Jennifer acquired her beliefs produces more true beliefs than false beliefs in the long run (i.e., that the process is reliable).
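To make the externalist's "more true beliefs than false beliefs over the long run" criterion concrete, here's a minimal sketch in Python. The accuracy figures are made up purely for illustration; they are not real statistics about eyewitnesses.

```python
import random

def eyewitness_belief(accuracy):
    """Simulate one belief produced by a belief-forming process.
    Returns True when the resulting belief happens to be true."""
    return random.random() < accuracy

def is_reliable(accuracy, trials=100_000):
    """The externalist's test: a process is reliable iff, over the
    long run, it yields more true beliefs than false ones."""
    true_beliefs = sum(eyewitness_belief(accuracy) for _ in range(trials))
    return true_beliefs > trials - true_beliefs

# Hypothetical accuracy figures, for illustration only:
print(is_reliable(0.75))  # True:  beliefs from this process count as justified
print(is_reliable(0.40))  # False: beliefs from this process do not
```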

Now, let's consider Jennifer's testimony from the point of view of an externalist.  We know that the process of eye-witness memory and testimony is unreliable; in fact, it's very unreliable.  We know that in the long run, more beliefs formed by these processes will turn out to be false than true. So, if we adopt an externalist model of justification, we ought to reject Jennifer's testimony.  It isn't a justified belief.

Adopting process reliabilism as our theory of justification tells us we ought to focus our attention on the reliability of processes rather than on reasons.  In the video clip we learn of all the different ways in which the (current) process of eye-witness testimony can fail and how it might be corrected.  It seems counter-intuitive to say we should dismiss Jennifer's testimony; however, by focusing on processes rather than reasons as justification, we can avoid tragic errors.  

Our preferred philosophical theory of epistemic justification has huge practical implications in many domains, and this is made most readily apparent in the court of law.  As a thought experiment, think about the practical consequences for your area of expertise of applying one theory of justification vs. another. You'll likely find there are important consequences.

Epistemology matters--and not just to philosophers.  The theory of justification we choose can have implications as to whether people are likely to be wrongfully convicted.

Boom goes the dynamite.

Thursday, July 10, 2014

Teaching Philosophy and Truths To Live By

Dedication:  This article is dedicated to my first philosophy professor Prof. Gordon (Langara College) as well as Prof. Sommers (UH) whose courses caused me to lose many hours of sleep for all the right reasons and whose teaching I try to emulate.

Introduction
Here are some of my thoughts on teaching philosophy.  Later in the post I make some comments on elements that I think make a great course.  I want to be clear that I don't think I've achieved them but they are part of what I think I ought to be striving for.  Also, I realize it's probably presumptuous and naive to think that after only 2 years of teaching philosophy I can make any pronouncements on the subject, but fuck it. Here goes.

What Makes A Good Intro Philosophy Class
A good intro philosophy class causes mental anguish.  The anguish happens because the students embark, willingly or otherwise, on an intellectual journey where many of their beliefs that seemed obviously true become uncertain.  Their most basic beliefs about the nature of the universe, the mind, moral responsibility, right and wrong, good and bad are all shown to be on shaky foundations--if on any at all.  And the foundations are not just shaky because of some fanciful and implausible Matrix-like scenarios.  The foundations are weak for good reasons that the students often can find no way to reject.

I'm not just talking about beliefs that are a consequence of a sheltered existence of which young students will be disabused later in life.  I'm talking about beliefs that the majority of people all over the world consider to be self-evident.

The anguish grows throughout the course as common sense belief after common sense belief is shown to be untenable.  The uncertainty this creates can have two effects, not necessarily mutually exclusive.  In fact, the mark of a great intro course is that the two effects are not mutually exclusive.  One effect I have mentioned already: anguish.  How is it possible that everything that seemed so evidently true is now so evidentially unjustified?  What should I believe? The progressive disintegration of certainty and its inevitable replacement with uncertainty is unsettling.  The second effect is a thirst to know more in hopes of reclaiming some of the lost certainty.

Each unit has the following arc.  I begin with the destructive phase.  We read an article that definitively undermines the common sense view.  The natural reaction is to reject the conclusion in order to hold onto what they had previously thought self-evident.  The problem is that they can't find fault with the argument.  Together, we go through each premise and try to find a way out, but alas, there is a powerful counter-reply for every objection.  The anxiety begins but hopefully that's not all. 

Humans have an aversion to uncertainty.  The justifications for my students' beliefs have just been decisively undercut and replaced with well-justified but unpalatable alternatives.   But they don't want to accept these alternatives.  There must be a way out.  This longing for a way out is what instigates the desire to learn more.  Maybe this next reading offers a way out.

The next reading is presented and discussed in class.  Initially it conforms better with their intuitions. Then we start to explore some of the objections and hidden consequences.  The objections are valid. The consequences undesirable.  We need a new answer.  "You'll find it in your book right after the previous reading.  Read it tonight and we'll discuss it tomorrow."  

We repeat the process.  The anxiety starts to grow again.  

"We need an answer!  Just tell us the answer!"  

"What's the point of philosophy? Nothing is ever solved! This is stupid."

Frustration is setting in.  We want our certainty back.  It's so much more comfortable.

Here is what separates a good philosophy class from a great one.  A good one whips the students into an existential frenzy and shows them that the universe is not the simple place they thought it was.  They learn that it's not as easy as it initially appeared to ascribe moral responsibility and blame and that these concepts might not ever be applicable as we typically use them; that neither dualism nor physicalism give us entirely satisfactory answers about the nature of mind and consciousness; that what is good and what is right can come apart and there is no clear way to choose between the two; that your beliefs can sometimes be justified without you knowing they're justified; that a belief can be justified and true yet still not count as knowledge; that even if there is a god he/she/it probably doesn't have the properties we ascribe to him/her/it.  Most importantly, they learn to withhold assent until they've thoroughly thought through the consequences of an argument--no matter how much it accords with what they want to be true.

Humans hate uncertainty.  It bothers us and we avoid it when we can.  A great philosophy course distinguishes itself from a good philosophy course in that it engenders hope that some form of progress on these issues is possible.  Philosophy is not a fun house of theories posing as truths.  We can move forward, but progress is slow.  Patience and careful thinking are necessary virtues.  If you don't have them, you must develop them or you will be in for a frustrating experience.

There is an analogy with science I offer my students:  No one ever throws their hands up at "science" and says "look at all those theories that didn't work out, science can't answer anything!" Science is a wasteland of discarded theories which were all at one time considered "true" or at least well-supported.  The idea is that we don't approach truth directly but we circle in on it by discarding what is not well-supported and keeping whatever is.  Approaching truth is a slow and difficult process, not a one-off affair.

At any given moment on most issues in science there are competing theories.  The presence of competing theories isn't evidence that there's no answer to the issue, nor that both theories are of equal merit.  As more and better evidence comes in, one theory will become untenable while the other continues to have no obvious reason to reject it.  We discard the former and keep the latter.

The same applies to philosophy.  The history of philosophy is a wasteland of theories on every philosophical issue--alive and dead.  This is evidence of progress.  We have found reason to reject many theories.  The ones that remain are well-supported given our current epistemic position.  When arguments and evidence become available to reject one over another as has happened throughout the course of both scientific and philosophical history, progress will have been made.

This raises a problem that Descartes recognized:  How do we know in advance which theories will eventually turn out to be true and which will turn out to be false?  The answer, of course, is that we can't.  And this is what motivated Descartes' method of radical doubt. Rather than build a foundation of scientific knowledge on theories and facts that may one day collapse under the weight of new evidence, he treated anything that could possibly turn out to be false as false.  Now, we'll only build our edifice of knowledge on facts and theories that will never falter.   There, problem solved.  

Or are we just left with another problem?  If we reject everything that could possibly turn out to be false, no matter how remote the possibility, what are we left with?  I won't spoil it for you, but the answer is: not much.  Not enough to rebuild a scientific account of the world, anyway.

You might complain that philosophy doesn't tell us what's true it only tells us what's false. But philosophy is in good company: the exact same criticism can be leveled at science.  

Today was the last lecture for my 101 class.  We were discussing Chalmers' claim that a complete science of the mind cannot be achieved if we restrict ourselves to a purely physicalist conception of the universe. That is, the physical sciences can never give us a complete understanding of the mind because to understand the mind you also have to study subjective conscious experience--and consciousness doesn't exist as a physical phenomenon; it's mental. According to Chalmers, the subjective phenomenology of experience, by definition, is outside the grasp of the physical sciences. Chalmers also says that even if we understood all the physical systems of the brain, down to the neuron, we still wouldn't be able to answer why these processes give rise to conscious experience or how consciousness arises from purely physical processes (in the brain).

But Churchland and Dennett disagree. And they have some fairly compelling arguments too, which I'll spare you.  By the end of the lecture I could see my students were a bit distraught.  I polled the class, asking which side they thought was more compelling and why.  Most didn't want to commit.  They saw the strength of both positions but also that, logically, you had to accept one or the other.  They can't both be true.

I don't want my students to leave their (most likely only) philosophy class as hardened skeptics throwing their hands up in the air at any attempt to establish certainty.  This isn't a good pedagogical outcome.  I want them to believe that progress is possible on many issues and that not all positions are of equal strength.  Progress is possible but you must invest time and energy.  Knowledge doesn't come easy, you have to work for it.  As Burge says:
Genuine understanding is a rare and valuable commodity, not to be obtained on the cheap.
Aside: Upon disclosing that we work in philosophy, every philosopher is inevitably asked the same question: "So, what's your philosophy?"  In the context of academic philosophy, this question makes no sense and is the subject of much eye-rolling and laughter amongst philosophers. Nevertheless, I've always wanted to have an answer to the question--one that didn't belittle the question but recognized in it the interlocutor's genuine philosophical curiosity and desire to gain some insight.  Over the last two years or so I think I've finally come up with an answer I like.  More on that in a moment...

I didn't like the fact that my students were visibly anxious at the end of my last lecture to them. This isn't how I want them to remember philosophy. One student raised her hand and asked the inevitable question.  "What's the point of all this if there's no answer?  No matter what position we take there are going to be problems."  

Ah! The need for certainty. 

"I have some philosophical truth for you.  Do you want to hear it?"  I asked.  "Please! Tell us NOW!" they implored.  I looked at each of them and said, "OK, here are three."

1.  Your own happiness is bound up in the happiness of others.

[Silence.]

2.  There are two things that are necessary to have a meaningful life.  There are others as well, but without these two you have no chance:  You must cultivate and strive for personal excellence in whatever you do and with equal or greater effort you must help others cultivate and realize their own personal excellence.  You cannot have a meaningful life if your life does not include both of these things.

[Silence.]

3.  The suffering of others matters.  You have an obligation to reduce the suffering of others in so far as you are able.

[Silence.]

Now you have your precious certainty.   

Live by it.

Saturday, June 28, 2014

Standard Arguments for Why It's OK to Eat Meat and Why They Are Much Weaker Than You Think

Introduction
At some point, every semester in my critical thinking class, I issue a challenge to my students.  For homework, they have to come up with their best possible argument for why it's OK to eat factory farmed meat.  Every class gives variations of the same handful of arguments, and they are indeed the same arguments most people give.  In class the next day, I formalize the arguments (i.e., break them down into their basic premises), put the arguments up on the overhead, and ask the students to criticize the arguments themselves.  Here's the thing about many arguments (on any topic): when you formalize them, their weaknesses become very apparent, often even to their most staunch supporters.

In this post I'll go over the most common arguments people give for why it's OK to eat factory farmed meat.  Before you read the arguments it is important that a few empirical facts be made clear.  First of all, animals on factory farms undergo unimaginable amounts of suffering.  They suffer from the moment they are born and every moment they are conscious.  This is no exaggeration.  Arguably, when the animal is killed, that is the best part of its life because it finally ceases to suffer.  It would not be hard to argue that these animals would have been better off never having been born than having to endure the lives that they do.

Pigs are kept in gestation pens with barely enough room to lie down.  They do not even have room to turn around. They develop sores from not being able to move from the same place.  Their legs splay out when they attempt to stand because their underdeveloped muscles cannot support their weight.  The unnatural density of animals confined to the same small space produces unsanitary conditions leading to the spread of bacteria such that the majority develop permanent diarrhea. When piglets are born they often catch the bacteria that causes the diarrhea and die.  The dead piglets are then made into a slurry which is mixed back into the pigs' feed and fed to the mothers.

I will end the description here but if you doubt the severity and extent of nightmarish conditions and constant suffering endured by the animals, here are some links to videos.  These videos are not the worst I've seen but they are sufficient to convey the point.

Pigs in Gestation Crates
Chickens

Part I:  The Standard Arguments
Before reading the following arguments I want to be clear that these arguments apply specifically to factory farming.  The ethical implications of eating meat from hunting or "humane" farming practices require further argument.

Argument 1:  The historical argument
(P1)  Historically humans have always eaten meat.
(C)   It is morally permissible to eat factory farmed meat.

To see why this argument fails we need to fill in the missing premise.

(P2)  If humans have done something historically then it is morally permissible.

(P2) causes the argument to fail because it is easily shown to be false.  Consider racism, slavery, sexism, genocide, and war.  Humans have historically engaged in these practices too.  It does not follow from this fact that these practices are morally permissible.  The argument commits the fallacy of appeal to tradition.

Argument 2:  The Evolutionary Argument
(P1)  We are designed to be able to eat meat. (Just look at my teeth! Look at my digestive system!)
(C)    It's morally permissible to eat factory farmed meat.

To see why this fails we fill in the missing premise:

(P2)  It is morally permissible to act in accordance with whatever capacities we have.

This premise can apply even to those who doubt evolutionary theory.  The origin of the capacities is irrelevant to why the argument fails. The argument fails because (P2) is false.  We have the capacity to kill, maim, punch, kick, etc., yet the fact that these actions arise out of natural capacities is no reason to accept them as morally permissible.  In short, the argument commits the naturalistic fallacy.

Argument 3:  I like it. It makes me happy.
(P1)  Meat tastes good and eating it gives me pleasure.
(C)   Eating factory farmed meat is morally permissible.

This argument is so obviously weak it doesn't really need to be addressed.  I'll fill in the missing premise and you can do the rest.

(P2)  If I enjoy something and it gives me pleasure then it is morally permissible.

Consider for a moment the amount of pleasure you get from eating meat.  In order for you to have that pleasure, a sentient animal suffered every single moment of its existence.  From its first breath to its last, it suffered so you can say "yum." I don't see how a reasonable person could say that the satisfaction one gets from a single meal justifies the lifetime of unremitting suffering endured by a sentient creature.

Argument 4:  We need to eat meat/We need protein.
(P1)  We need to eat meat for protein.
(C)   Therefore, eating factory farmed meat is morally permissible.

I was guilty of this argument.  It was my last reason for not becoming a vegetarian.  I mistakenly believed that I couldn't be an athlete on a vegetarian diet (which in itself was a bad argument).  For this argument we don't even need to look at the hidden premise.  (P1) is empirically false.  Entire cultures have been vegetarian for millennia.  We don't need to eat meat for protein.  I am a competitive athlete and I have more muscle mass than the average guy, yet I am able to achieve this without consuming meat.  When I went vegetarian I didn't lose any muscle mass.   It is true that there may be a very small segment of the population that might need to eat meat for medical reasons, but by and large, this argument fails for most of us.

Argument 5:  Other animals eat meat.  
(P1)  Animals eat other animals and we don't say it's morally wrong.
(C)   Therefore, it's morally permissible for humans to eat meat.

We can look at the hidden premise to see why this argument fails but it really isn't necessary.  But for fun I'll put it down:

(P2)  If other animals do something then it's morally permissible for us.

I won't even address (P2) because it's obviously silly.  Consider these other disanalogies instead: (a) animals in the wild (and true carnivores in captivity) genuinely do need to eat meat or they will die. They don't have a choice whereas we do.  (b)  Humans are capable of moral reasoning while animals are not.  (c) Wild animals are not running factory farms.

Argument 6a:  What if plants feel pain?
(P1)  If plants feel pain then no matter what we eat we'll cause pain and suffering.
(C)   It's morally permissible to eat factory farmed meat.

I think people with philosophical tendencies appeal to this argument.  I've actually seen it used in the comments section of a philosophy website for philosophers.  I probably used it in undergrad.  This is the point where philosophers need to get out of their armchairs and read some basic science.  The missing premise for this argument to work is:

(P2)  Plants can feel pain.

In order to feel pain an organism needs to have a central nervous system.  We know plant biology down to the molecular level.  They do not have central nervous systems and so cannot feel pain (despite what some crank websites will have you believe).

Argument 6b:  How do we know that the animals are suffering? (Yes, people actually make this argument)
(P1)  Suffering is an internal phenomenon and so we have no direct way to verify its truth conditions.
(P2)  It follows that we can't be sure that animals are suffering.
(C)   Therefore eating factory farmed meat is morally permissible.

This is yet another case of undergrad philosophers needing to take a basic science course. The reason why animals are used for medical testing (pain killers included) is because their physiology closely resembles ours.  This in itself should undermine the objection.  We can go one step further and ask how we know that a fellow human is suffering.  The answer is behavior.  One might respond that "no, it's because we have language and we can communicate our inner condition this way."  But this is clearly false: we don't need someone to tell us they are in pain if we lock them up in a cage and poke them with a cattle prod. 

If you are still unconvinced, I recommend you watch the following short video.

Argument 7:  But it's hard!
This isn't so much an argument as it is an excuse.  To see why it fails, consider how you'd respond to a slave owner who gave you the same excuse.

Part II:  Why Give Animals Moral Consideration of their Interests? (I.e., why should we include animals in our moral circle.)

"The question is not, Can they reason? nor Can they talk? but, Can they suffer?"
--Bentham 

Basic Argument:  Here's an obvious question: what is worse, kicking a dog or kicking a stone?  I hope you answered the former.  Why do we say it's bad to kick the dog and not the stone?  Well, the simple answer is that the dog can feel pain and will suffer while the stone cannot.  The dog has an interest in avoiding suffering because suffering is bad.  If suffering is bad then it's bad for anything that experiences it.  It would seem strange to say of any animal capable of suffering that the (nonconsensual) suffering it endures is good or value neutral.  

Some people might respond that sometimes suffering is good, like when we work really hard for something or train hard in the gym to get gainz.  But this is to confuse the contingent consequences of the suffering with the suffering itself.  The suffering itself is bad.  Its consequences (in these cases) are good; however, they could have turned out bad (e.g., you get a 'D' on the paper you suffered through, you don't make any gainz after your intense workout).  Given the choice between achieving the desired consequence through suffering and achieving it without suffering, most would choose to achieve the consequence without suffering.

We give moral consideration to the dog because it can suffer and we don't give it to the rock because it can't suffer.  In short, if a being has the capacity to suffer then it has the right to have its interest in avoiding suffering taken into account.

There are several possible objections to the claim that we must include animals in our moral circle.  I want to deal with only one line of these objections:

But They Aren't Human!
The assumption here is that there is some morally relevant property that humans have that animals don't have.  Common answers are "rationality," "intelligence," "self-awareness," and "language."

Reply 1: If these are the relevant moral properties then we should exclude human infants and severely handicapped humans from our moral circle.  They do not have these properties.  We would not have to consider their suffering in our moral calculus.  Adult chimps and other mammals (and some birds) exhibit some of these traits to greater degrees, so we ought to consider their interests more than the interests of infants and the severely handicapped. Nevertheless, we do include infants and the severely handicapped in our moral circle because they have the capacity to suffer.

Reply 2:  If these are the relevant moral properties then it could follow that one's interests should be considered in proportion to the degree to which one has the relevant properties.  The moral interests of highly rational, intelligent, self-aware, and linguistically skilled individuals should be given more consideration than those who have these properties to a lesser degree.  Few people agree with this, and so the aforementioned properties are not relevant to moral consideration.

Reply 3:  Suppose a severely handicapped human or infant and a normal human were both in an equal amount of pain.  You only have enough of a pain killer for one.  Splitting the dose will render it ineffective.  How do you decide to whom you will administer the dose?  Is intelligence, rational thought, or capacity for language relevant to your decision?  Most people would say no.  

Counter-Reply 1: We include infants and the severely handicapped in our moral circle not only because of their capacity to suffer but because they are human.

Reply:  This only pushes the same problem back one step.  What properties do humans have that distinguish them from animals in terms of worthiness of moral consideration of interests?  You haven't yet told me what's so special about the category "human" as it relates to moral consideration of interests.

Counter-Reply 2:  The infant is a potential human.

Reply: Again, this only pushes the same question back one step:  What property do humans have that animals don't have that confers moral status?

Last Resort Counter-Reply:  You don't get it.  WE ARE HUMAN, THEY ARE ANIMALS!!!
This is circular reasoning.  Let's lay the argument out to show why:
(P1)  We are human and they are animals.
(C)   Humans' interests are worthy of moral consideration while those of animals aren't.

The only way this argument works is if you add the hidden second premise:

(P2)  Human interests are worthy of moral consideration and animals' interests aren't.

Notice that the conclusion of the argument is contained in (P2).  The only way the argument works is if you have the conclusion already in the premises.  This is the very definition of circular reasoning.

Part III: Argument for Moral Consideration of Pigs and Cows
Watch the following video clip.

Do you think the way they are treating the dogs is wrong?  If you do, consider this.  Pigs are every bit as intelligent and social as dogs.  They are every bit as capable of showing affection for their young and fellow animals.  They remember people and other animals.  There are very few differences between pigs and dogs in terms of social and cognitive skills.  If you think it's wrong to treat dogs in this way, what morally relevant property do dogs have that pigs don't?  Imagine if we did to dogs what we do to pigs.  Would you stand for it?  What would you say to someone who said: But I really like the taste of dog!  or But I need protein! or But we're designed to eat meat--look at my teeth! or But they aren't human!

Think about it.

Part IV: Practical Advice for Becoming a Vegetarian (Or Possibly Vegan)

First an aside on ethical living: When it comes to ethical behavior I favor the Aristotelian approach.  That is, we should aim to be virtuous but recognize that we will screw up sometimes.  The good life is the activity of virtuous behavior.  If you take a rule-based approach (i.e., all or nothing), psychologically, once the rule is broken, most people will just revert to their old habits. Ethical behavior requires daily effort and practice.  We will make mistakes but that is no reason to give up the cause. 

I'm not ready to give up meat but don't want to support factory farming:  What should I do?
There are meat producers that adhere to humane practices and there are many supermarkets at which you can buy humanely raised meat.  I'll list a few below.  First, there are a couple of distinctions that should be kept in mind to avoid falling prey to marketing hype.

The label "All Natural" means nothing no matter what product it's applied to and (depending on the jurisdiction) "organic" when it is applied to meat might only refer to the animal's diet, not its treatment.

Eggs:  Best is "free range."  This means the chickens are able to walk around outdoors and have enough space for a normal chicken social life.  "Cage free" means that the chicken are kept in a large barn rather than in cages.  They may or may not have access to the outdoors.  The cage free eggs tend to be priced fairly close to conventional eggs.  The free range eggs usually run about 5.00/dozen at Smith's.

Chicken:  The same "free range/cage free" distinction applies here too.

Beef:  "Grass-fed Free range" beef means the cows got to live a life outside eating grass.  Unless indicated otherwise, most beef is from cows confined to feed lots with minimal exercise.

Pork: Look for "pasture-raised" pork.  This mean the pigs got to have a somewhat normal life free from the suffering endured in gestation crates.

Meatless meat:  Over the last few years, as more and more people go vegetarian/vegan, there's been a profit incentive to create good "meatless meats."  You can find several brands that make fake chicken, beef, pork, sausage, hot dogs, and cold cuts.  The taste and texture of these products are very good and they are almost indistinguishable from the real thing.

The Hard Part
Moving to a vegetarian diet was actually much easier than I thought it'd be. By far the most difficult part was eating out and late night meals after going out, so I will only address those.

Eating Out:  Most restaurants offer seafood.  The problem is that it usually costs 2 dollars more to get the shrimp or scallop option than it does to get the chicken, pork, or beef.   If you're like me and not fabulously wealthy (when's this philosophy thing going to pay off?) then you're price sensitive.  What to do?  Here's what I do.  First watch this short happy video.

Now ask yourself.  Would you pay an extra $2.00 to prevent Little Miss Sunshine and chickens just like her from enduring a life-time of suffering?  Is it worth $2.00 to you?  When I frame my decision in this light, the decision is easy.

Late night:  Often after a night out we're tired, hungry, and possibly a bit drunk.  Not a good combination for ethical decision-making; trust me, I know!  If you go to most fast food restaurants (which are the main type of place open late) you'll find beef, beef, chicken, beef, chicken, and more beef.  Luckily, most fast-food places have one fish burger.  Maybe it isn't your first choice but it's a way to avoid supporting factory farming. Another solution is to go to a Denny's or IHOP and order eggs and toast/waffle/pancake.  Yes, the eggs are probably from battery hens but it's likely the lesser of the available evils.

EDIT: Several commenters have noted that fish should be excluded as a menu option because there's growing evidence that several species are capable of social cognition and (most importantly) of experiencing pain.  This is a legitimate issue in regards to the permissibility of killing.  However, the scope of this article is confined to evaluating the moral permissibility of factory farming in relation to the amount of suffering the animals endure over the course of their lifetimes.

Wednesday, June 11, 2014

Why Study Philosophy? Epistemology Edition

I've been having this weird experience of questioning the value of studying philosophy.  Well, not so much for me personally, but for people who are not already interested.  Here's the thing: I, like pretty much anyone who teaches and studies philosophy, am passionate about it.  There's clearly no need to sell me on it.  The problem is that I realized that the things I find interesting and important about philosophy might not be interesting and important to those on the outside.  It's kind of like your favorite food or TV show: it's hard to conceive of why anyone wouldn't find them to be great or at least see the value in them.

Over the summer semester I'm teaching a 101 class.  As with most 101 classes, I begin with epistemology.  Epistemology is the study of knowledge.  Two of the traditional main questions are (1) what is knowledge? (e.g., what's the difference between simply believing something and knowing something?) and (2) what can we know? (e.g., many beliefs that were previously held to be true can end up being false, so is there a way to systematically decide beforehand which are likely to be true or false?).  A third issue I include in my class is the issue of justification:  What conditions have to be met before we can say someone's belief is justified?

As I was preparing my lectures I started to ask myself why should Joe-Shmo care about what knowledge is?  Or what we truly can or can't know?  Or what counts as a justified belief?  I was actually in the middle of my lecture today when these doubts came to a head, in my head.

We were discussing whether a person who holds a belief has to have conscious access to the reason why his belief is justified in order for the belief to be justified.  Let's back up a step.  Intuitively, or at least historically, we think that in order for someone to hold a justified belief they have to be able to provide some sort of argument for why they hold the belief.  That is, they have to be able to provide the justification.  If you asked someone "why do you believe X" and they answered "I don't know but it's justified," most of us would think that this person's belief couldn't possibly be justified.  They have no reason supporting why they believe what they believe.  This, by the way, is called "internalism" (the reason why a belief is justified must be consciously accessible to the believer at the time he is asked to justify the belief).

But hold on a tick. Suppose you do a logic puzzle and the answer turns out to be "yes."  Your belief that the answer is "yes" is justified by the logical proof that you performed.  The process of following the rules of logic yielded the answer.  Ten years later someone gives you the same logic puzzle, but you don't remember how to do the logic to get the answer.  You do, however, remember that the answer is "yes," although you have no idea how you got that answer: in fact, you don't even remember whether you did the proof or whether someone else just told you the answer.  Are you justified in believing that the answer is "yes"?  Most of us would say "yes."  So, it seems that you can have a justified belief for which the believer can't, in the moment, provide a justification.  This is called "externalism" (the justification for a belief can be external to the believer's consciousness).

Anyhow, as you can see, it all gets very abstract very fast.  As we got bogged down in this I noticed some of the students tuning out.  It was at this point I had my little internal panic attack.  Does what I have devoted my life to have value to anyone outside of the profession?  Why should these people care about this?

Answer 1
One common response (aside from "who cares?") to these seemingly esoteric debates is subjectivism: whatever counts as justification for you is fine and whatever counts as justification for me is fine. There are a couple of problems with this:  first of all, it's not a solution: it's surrender.  The second is that no one seriously believes this.

Every day we are bombarded with information.  Why do we choose to believe some of that information and reject other parts of it?  We don't randomly decide what to believe and what not to (well... actually, most of the time we just go with whatever confirms our biases, but let's restrict this to deliberative inquiry).  If someone came to us for advice on what they should and should not believe, no one would think it's good advice to say "it doesn't matter why you believe something--just believe whatever you like."

Here is why epistemology matters:  Ultimately we have to figure out how we are going to live our lives. The decisions we make will directly influence the quality of our lives.  But the decisions we make are a direct result of what we [choose to] believe and don't [choose to] believe.  If we have no principled way to distinguish between something we have good reason to believe and something for which we don't, we will very easily make poorly informed decisions about how to live our lives and how to treat others which will in turn directly impact the quality of our life.

In short, your beliefs are more likely to be true if they have good justifying reasons supporting them. But if you have no principles or rules for what distinguishes a good justification from a bad one (or none at all), you'll be in for a rough ride.  Before you can act you must have beliefs, and the "truthiness" of your beliefs will determine the quality of your action and its likelihood of producing the desired result.  For this reason, it's important for everyone to study the properties of good and bad justification.

If you don't at least spend some time thinking about what you can and cannot know, and to what degree of certainty, before entering other domains of knowledge, you might be on a fool's errand.  If you seek knowledge about something which is impossible to know, yet you failed to reflect on this beforehand, you've just wasted (part of) your life.

Answer 2:
Beyond the general answer to why epistemology matters, let's return to why we should care about the abstract and esoteric issue of internalist vs. externalist justification.

Click on the link and watch the video:

Ok, so if you didn't have a chance to watch the entire video here is the most relevant point:

72% of wrongful convictions that are later overturned by DNA evidence are a result of eyewitness testimony.

Consider Jennifer Thompson's testimony.  Pretty convincing, right?  She has access to all the justifications for her belief about who raped her.  From the point of view of internalism, she is justified in her belief.  Now, let's consider the same situation from the point of view of an externalist.  We know that the process of eye-witness memory and testimony is unreliable; in fact, it's very unreliable.  So, if we adopt an externalist model of justification, we ought to reject Jennifer's testimony.  It isn't a justified belief.

Our preferred philosophical theory of epistemic justification has huge practical implications in many domains, and this is made most readily apparent in the court of law.  As a thought experiment, think about the practical consequences for your area of expertise of applying one theory of justification vs. another. You'll likely find there are important consequences.

Epistemology matters.

Ok, I feel a bit better now.  Back to grading reflections.

Wednesday, April 9, 2014

Critical Thinking and Vaccine Herd Immunity Part 2: More Vaccinated People Get Measles than Non-Vaccinated

Introduction
Welcome to Part 2 of an investigation of vaccine herd immunity through the concepts of critical thinking.  The purpose of these blog entries is two-fold.  One is to explore the controversy over the legitimacy of herd immunity and the second is to learn central concepts in critical thinking.  Essentially, these posts are an exercise in applied critical thinking.  

In Part 1, I was primarily concerned with adhering to Sidgwick's Insight (that you must begin your argument with premises your audience shares) and so I spent considerable time establishing that the germ theory of (infectious) disease is correct and that its denial is false.  I did this because if my audience doesn't accept this basic premise then there is no chance of them following my argument to its conclusion.  If you have read Part 1 and deny that microorganisms cause infectious diseases, please explain the grounds for your position in the comments section below and I will do my best to address it.

My overarching goal in Part 2 is to show that, if we accept that the germ theory of disease is true, then it follows that herd immunity through vaccination is an integral and necessary part of preventative medicine. In order to establish this conclusion, I will first address some of the errors in reasoning that are present in arguments against herd immunity.  Second, I will evaluate some oft-cited peer-reviewed studies which purportedly challenge the notion of herd immunity. Throughout, I will appeal to fundamental concepts of critical thinking and principles of scientific reasoning.

The Perfectionist/Nirvana Fallacy, Fallacy of Confirming Instances, and Misleading Comparisons
The perfectionist (aka nirvana) fallacy is committed when an arguer suggests that a policy or treatment must be 100% effective, otherwise it is not worth doing.  As I'm sure you all know from my prior post on Ami's 5 Commandments of Critical Thinking, risk and effectiveness are not absolute values: they must always be measured relative to alternatives or to no intervention at all.  Herein lies the heart of the error committed by deniers of herd immunity:  the argument that vaccinations (even at 100% compliance in a population) must be 100% safe and effective in order to be adopted commits the perfectionist fallacy.  Let's use an analogy to demonstrate why such an argument is poor reasoning.

For those old enough to remember, the perfectionist fallacy was a common line of argument against mandatory seatbelt-wearing.  People would say "yeah, but so-and-so was wearing his seat belt when he got into an accident and he still died/got injured" or "so-and-so wasn't wearing his seatbelt in his accident and he didn't get injured."  I think the seat belt analogy is a good one:

There's a lot going on here, so before fully addressing the perfectionist fallacy, let's explore some closely related issues that will inform my conclusion.  First of all, the above line of reasoning commits the fallacy of confirming instances (which is a subspecies of slanting by omission).  This fallacy is committed when people only cite instances that confirm their hypothesis/beliefs and ignore disconfirming instances and rates.

If you want to know whether a policy/treatment/intervention is effective you must look at the whole data set: how many people got injured and/or died wearing seat belts compared to how many died not wearing them. For example, suppose there were 25 000 people who got into an accident over the last year and 5 000 of those who died were wearing seat belts.  If someone were to say "ah ha! 5 000 people who got into accidents wore seat belts, therefore seatbelts don't work" they would be committing the fallacy of confirming instances.   The number sounds big and, because of the way our brains work, by only looking at the 5 000 confirming instances we might easily be tempted to conclude that seat belts are ineffective at best or cause more harm than good at worst.

But we aren't done: we need to look at the entire data set.  Suppose it turns out that the remaining 20 000 people who were in accidents weren't wearing seatbelts, and they all died.  Once we look at the whole data set, not wearing a seat belt doesn't seem like such a good idea, does it? (Let's assume that the types of accidents were roughly the same in both groups.)

Now complete the analogy with vaccines.  Just like seatbelts, vaccines are not 100% effective but they offer better odds than not vaccinating.  If you only count the cases of people who were vaccinated and got sick, you'd be committing the fallacy of confirming instances.  What you also need to know is how many unvaccinated people got a vaccine-preventable disease, and then you need to compare the two numbers.

But wait! There's more! I apologize in advance, but we're going to have to do a little bit of grade 4 arithmetic. The absolute numbers give us one piece of the picture, but not all of it. We also need to know something about rates.  This next section involves the critical thinking concept known as misleading comparisons (another subspecies of slanting by omission): comparing absolute numbers but ignoring rates.

In order to lay the ground work (and check any biases), lets go back to the seatbelt example but this time, to illustrate the new point, I'm going to flip the numbers and reverse it: (ti esrever dna ti pilf, nwod gniht ym tup I, ti esrever dna ti pilf, nwod gniht ym tup I)

Suppose in this new scenario there were 25 000 fatal car accidents in the past year:


  • 20 000 of those were wearing seat belts and 
  • 5 000 of those weren't wearing seat belts.

Well, well, well.  It doesn't seem like seat belts are such a good idea any more...just look at the difference in numbers! (Oh, snap!)

This scenario is just like with vaccines.  We often see credible reports that the number of vaccinated people who end up infected far exceeds the number of non-vaccinated people who got infected.  Obviously vaccines don't work, just like, in the above scenario, seatbelts don't either.

As you might have guessed, there is a very basic math error going on here.  Can you spot it? Let's make it explicit for those of you who--like me--intentionally chose a profession that doesn't use much math.

Suppose that the total population we are evaluating is 500 000 people.  Of those people, 90% (450 000) wear a seatbelt when driving and 10% (50 000) don't.  Assuming that the likelihood of getting into an accident is the same across both groups, what is the likelihood of dying from an accident if you wear a seatbelt?  


  • 20 000 people who died in accidents while wearing a seat belt ÷ 450 000 people who wear seat belts = 4.44%  
What is the likelihood of you dying from an accident if you don't wear a seatbelt?


  • 5 000 people who died in accidents while not wearing a seat belt ÷ 50 000 people who don't wear seat belts = 10%.

As you can see, the absolute numbers don't tell the whole story.  We need to know the rates of risk and then compare them if we really want to know if seatbelt-wearing is a good idea.  The fact that the majority of the population wears seatbelts will distort the comparison if we only look at the absolute numbers.

The percentages measure the rates of risk (i.e., probability of infection/death).  If I wear a seat belt, there is a 4.44% chance that I could die in an accident.  If I don't wear a seat belt, there is a 10% chance I could die in an accident.  If you could cut your risk of dying by more than half, would you do it? Would you do it for your child?  I would.  What would you think about a parent that didn't do this for their child? In fact, with vaccines the disparity in rates between vaccinated and unvaccinated is often much greater than in my seat belt example.  For example, unvaccinated children are 35 times more likely than vaccinated children to get measles, and they have a 22.8-fold increased risk of pertussis.
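If you'd rather see that arithmetic spelled out, here's a minimal sketch in Python using the made-up seatbelt numbers from above (they are illustrative, not real accident data):

```python
# Hypothetical numbers from the example above -- not real accident data.
wearers, wearer_deaths = 450_000, 20_000
non_wearers, non_wearer_deaths = 50_000, 5_000

# Absolute counts mislead: 20 000 deaths vs 5 000 makes seatbelts look bad...
print(wearer_deaths > non_wearer_deaths)                # True

# ...but the rates tell the real story.
wearer_risk = wearer_deaths / wearers                   # 4.44%
non_wearer_risk = non_wearer_deaths / non_wearers       # 10%
print(f"{wearer_risk:.2%} vs {non_wearer_risk:.2%}")    # 4.44% vs 10.00%
print(f"{non_wearer_risk / wearer_risk:.2f}x")          # 2.25x riskier without one
```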

As it so happens, the vaccination compliance rate in most parts of the US is somewhere in the mid-to-upper 90 percent range, so of course if we only compare absolute numbers it's going to look like people who are vaccinated are more prone to infection than the non-vaccinated.  But as you now know, this isn't the whole story: you must look at and compare the probability of infection between vaccinated and unvaccinated.  Don't be fooled by misleading comparisons!

Back to Reality, Oh There Goes Gravity! Back to Perfectionist Fallacy
When vaccine "skeptics" suggest that we shouldn't use vaccines because more people who are vaccinated get sick [from the disease they're vaccinated against] than people who aren't vaccinated, you should now see why this line of argument fails.  What matters is relative risk between vaccinated and unvaccinated.  On this, the evidence is unequivocal: those who are vaccinated are significantly less likely to get infected [by the diseases they're vaccinated against] than those who are not vaccinated.

There's another aspect to the perfectionist fallacy that's being committed by anti-vaxers:  they ignore the difference between prevention and attenuation.  Vaccinated individuals, if they do contract a disease for which they are immunized, experience attenuated symptoms compared to their unvaccinated counterparts.   Again, it ain't perfect but it's better than not being vaccinated.


Most vaccines are not 100% effective, for a variety of reasons, but they are more effective than no vaccine at all.  To claim that vaccine producers and proponents say otherwise is to commit the straw man fallacy.  To infer that we shouldn't use vaccines because they aren't 100% safe and effective is to commit the perfectionist fallacy.  Either way, you're committing a fallacy.

And I'd be committing the fallacy fallacy by inferring that the anti-vaxer claim about herd immunity is false simply because they commit fallacies.  Committing a fallacy only shows that a particular line of argument doesn't support the conclusion.  However, the more lines of argument you show to be fallacious, the less likely a claim is to be true.  Fallacy-talk aside, what we really need to look at is the evidence.

The Studies that "Show" Herd Immunity is a Myth
Anti-vaxers luvz to kick and scream about how you can't trust any scientific studies on vaccines cuz big Pharma has paid off every single medical researcher and every national and international health organization in the world that publishes in peer-reviewed journals. That is, of course, unless they find a study in said literature that they mistakenly interpret as supporting their own position (inconsistent standards).  Then, all of a sudden, those very same journals that used to be pharma shills magically turn into the One True Source of Knowledge.  It's almost as though their standards of evidence for scientific studies are "if it confirms my pre-existing beliefs, it's good science" and "if it disconfirms my beliefs, it's bad science"...

Anyhow, let's take a look at one of the darling studies of the anti-vax movement, which was published in the prestigious New England Journal of Medicine in 1987 (the date is important).  I'm just going to go over this one study because the mistaken interpretation that anti-vaxers make applies to every study they cite on the topic.

First of all, why do anti-vaxers love this study so much? Well, just look at the title:


Measles Outbreak in a Fully Immunized Secondary-School Population


Ah! This scientifically proves that vaccines don't work and herd immunity is a big pharma conspiracy!  Obviously, we needn't even read the abstract.  The title of the study is all we need to know.

Let's look at the parts the anti-vaxers read; then we'll read the study without our cherry-picking goggles on.  Ready?  Here's the anti-vax reading:

"We conclude that outbreaks of measles can occur in secondary schools, even when more than 99 percent of the students have been vaccinated and more than 95 percent are immune."

OMG! The anti-vaxers are right!  Herd immunity is a pharma lie!  It doesn't work! (Perfectionist fallacy)

Actually, we don't even need to read the study to see why the anti-vaxers are mis-extrapolating from it. Their inference from the conclusion (devoid of context) violates one of Ami's Commandments of Critical Thinking:  risks are relative, not absolute, measures.

So, yes, some of the vaccinated population got measles (14/1806 = 0.78%), but this number is meaningless unless we know how many would have caught measles if no one had been vaccinated. Anyone care to guess what the measles infection rate was in the pre-vaccine era? 20%? 30%? Keep going...it's 90%!

Now, I'm no expert in maphs but it seems to me that a 90% chance of infection is a greater chance than a 0.78% chance of infection.  Uh, herd immunity doesn't work?  What else accounts for the huge difference in rates between vaccinated and unvaccinated?
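In case the comparison isn't stark enough, here it is in a few lines of Python, using the study's 14/1806 figure and the rough 90% pre-vaccine attack rate cited above:

```python
# Attack rate among the vaccinated students vs. the pre-vaccine era.
infected, students = 14, 1806
vaccinated_rate = infected / students   # ~0.78%
pre_vaccine_rate = 0.90                 # ~90% caught measles pre-vaccine

print(f"Vaccinated attack rate:  {vaccinated_rate:.2%}")
print(f"Pre-vaccine attack rate: {pre_vaccine_rate:.0%}")
print(f"Roughly {pre_vaccine_rate / vaccinated_rate:.0f}x higher without vaccination")
```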

Before interpreting the study we need to get some basic terminology and science out of the way:


  • Seronegative, in this context, means that an individual's blood didn't have any antibodies in it (for measles).
  • Seropositive...meh, you can figure this out.
  • How vaccines are supposed to work (cartoon version): The vaccine introduces an antigen (foreign body), which your body responds to by producing antibodies.  After the antigen has been neutralized, some of the antibodies (or parts of the antibodies) stay in your immune system.  When you come into contact with the actual virus or bacteria, your body will already have antibodies available to fight it. Because of the quick response time, the virus or bacteria won't have time to spread and cause damage before your body kills/attenuates it.
  • Some people don't produce antibodies in response to some vaccines.  These are the people who don't develop immunity.  If they don't develop the antibodies, they are seronegative.  If they do, they are seropositive. 

Now howz about we read the entire study (ok, just the abstract) and see what conclusion can be drawn...Here's the abstract (it's all we really need):

An outbreak of measles occurred among adolescents in Corpus Christi, Texas, in the spring of 1985, even though vaccination requirements for school attendance had been thoroughly enforced. Serum samples from 1806 students at two secondary schools were obtained eight days after the onset of the first case. Only 4.1 percent of these students (74 of 1806) lacked detectable antibody to measles according to enzyme-linked immunosorbent assay, and more than 99 percent had records of vaccination with live measles vaccine. Stratified analysis showed that the number of doses of vaccine received was the most important predictor of antibody response. Ninety-five percent confidence intervals of seronegative rates were 0 to 3.3 percent for students who had received two prior doses of vaccine, as compared with 3.6 to 6.8 percent for students who had received only a single dose. After the survey, none of the 1732 seropositive students contracted measles. Fourteen of 74 seronegative students, all of whom had been vaccinated, contracted measles. In addition, three seronegative students seroconverted without experiencing any symptoms.

Things to notice:
1) Despite records showing that (almost) 100% of the students had been immunized, 74/1806 = 4.1% of the students were seronegative (i.e., no measles antibodies detected).  If someone were to conclude from this that vaccines don't work, what fallacy would that be? (You should know this one by now.)  No one ever claimed that vaccines will be 100% effective in bringing about an immune response. A 95.9% response rate is nothing to sneeze at.  (I've reproduced the arithmetic in the sketch after this list.)

2) Of the students who'd had only a single-dose measles shot, 3.6% to 6.8% were seronegative.  It's not in the abstract, but the higher rate corresponded to students who'd had the single shot within their first year of life; the lower rate corresponded to students who'd had it after their first year.  This pattern is consistent with other studies on the relationship between antibody presence and the age at which the measles shot was given.  Should we conclude from this that the measles vaccine doesn't work? Nope.  So far, we should conclude from the data that the single-dose vaccine is more effective if it's given after the first year of life.  Also, a 6.8% failure rate is better than a 90% failure rate.   (But a 90% failure is natural!)

3) Of the students who'd received two doses, 0% to 3.3% were seronegative.  Consistent with the above data, within the two-shot group the 3.3% figure corresponded to those who had their first shot before the age of one.  Even so, 3.3% is still lower than either of the single-dose groups.  Also, antibodies were present in 99% of those in the two-shot group who'd had their first shot after the age of one.

4)  None of the seropositive students contracted measles.  No explanation needed (I hope).
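Here's the sketch promised in point 1. It reproduces the overall seronegative rate from the abstract, plus a back-of-the-envelope 95% confidence interval using the normal approximation (the study's stratified intervals rely on subgroup sizes the abstract doesn't report, so those can't be recomputed here):

```python
from math import sqrt

# Overall seronegative rate and a normal-approximation 95% CI.
seronegative, n = 74, 1806
p = seronegative / n                 # ~4.1%
se = sqrt(p * (1 - p) / n)           # standard error of a proportion
low, high = p - 1.96 * se, p + 1.96 * se

print(f"Seronegative rate: {p:.1%}")        # 4.1%
print(f"95% CI: {low:.1%} to {high:.1%}")   # roughly 3.2% to 5.0%
```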

So, what is the conclusion here?  
Is the conclusion that vaccines don't work?  Nope.  The conclusion is that, for the measles vaccine, immunity increases if you give two shots rather than one, and that the first shot should be given after the first year of life.

And guess what?  Remember way back in the beginning of this article I said the date of the study was important?  Guess why?  Because the study is about an outbreak that took place in 1985, and after this and other studies of similar events, the CDC changed its policy on the measles vaccine: instead of a single-shot vaccine, it became a two-shot vaccine with the first shot administered after the first year of life.  This, of course, is the correct conclusion to draw from the data--not that vaccines don't work.

Guess what happened after the new vaccine schedule was introduced?  Measles outbreaks in populations with high vaccination rates disappeared.  

Here's a graphic of the distribution of vaccinated vs unvaccinated for recent outbreaks of measles:
What conclusion follows from the data?

Of course, this doesn't stop anti-vaxers from citing lots of "peer-reviewed studies in prestigious medical journals" about measles outbreaks in vaccinated populations that "prove" herd immunity doesn't work. Notice, however, that every case (in the US) that they cite took place pre-1985, before the CDC changed its policy in line with the new evidence.

Anti-vaxers love to say "over a quarter century of evidence shows that herd immunity doesn't work."  This is what we call slanting and distorting by omission.  Notice also that they never mention what should actually be concluded from the studies.  I'm not sure whether it's because they don't actually read the studies, don't understand them, or have biases so strong that they don't want to understand them.  That's one for the psychologists to figure out...

One final point.  Sometimes anti-vaxers like to cite examples of individuals who got measles post-1985, as though this somehow proves the two-shot policy doesn't confer immunity. Can you spot the reasoning error?

Here's a hint:  Do you think measles incidence rates are the same across the entire US population? Which demographic do you think occasionally catches measles? (Usually when its members travel abroad to a country with low vaccination rates.)

After the new vaccine schedule was introduced, did everyone who was alive pre-1985 go and get a second shot?  Nope.  A large portion of the population is still in the single-shot category.  These are the people who tend to catch measles, not people born after the new policy was introduced.

Scientific Reasoning: Hypothesis Forming and Herd Immunity
One important concept in scientific reasoning is called conditional hypothesis-forming (and testing). I'll use an example to illustrate:  Suppose you think there is a causal connection between alertness and caffeine consumption.  You have a preliminary hypothesis:  drinking coffee causes alertness.  To test the hypothesis, you form a conditional hypothesis--in this case, "if I drink coffee then I will feel alert."  Once you have a conditional hypothesis, you can run a test to see whether it's confirmed.

As I've mentioned before, merely confirming a hypothesis doesn't necessarily prove it's true, but it's the first step on the way to refining your hypothesis.  In our example, if I drink decaf coffee and don't feel alert, the hypothesis is falsified; if I drink regular coffee and do feel alert, it isn't. Drinking both tells me that there is something in the regular coffee that isn't in the decaf (duh!) which causes alertness.  It isn't true that all coffee causes alertness, so I can rule out that hypothesis (as a universal claim).

I can refine my hypothesis to "caffeine causes alertness," then formulate a refined conditional hypothesis: "if I drink something with caffeine in it then I will feel alert." You can then try drinking caffeinated beverages and see whether the hypothesis is confirmed.  The process of science is a cycle of hypothesis formation, testing, and refinement.
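Here's a toy sketch of that falsification logic in Python.  The "observations" are invented for illustration; the key point is that a conditional hypothesis is falsified only by a case where its condition holds but the predicted outcome doesn't follow:

```python
# Did drinking X make me feel alert? (invented observations)
observations = {
    "regular coffee": True,    # felt alert
    "decaf coffee": False,     # did not feel alert
    "caffeinated tea": True,   # felt alert
}

def falsified(condition):
    """A hypothesis 'if <condition> then alert' is falsified only by a
    case where the condition holds but alertness doesn't follow."""
    return any(condition(drink) and not alert
               for drink, alert in observations.items())

# Hypothesis 1: "if I drink coffee then I will feel alert."
print(falsified(lambda d: "coffee" in d))     # True -- decaf falsifies it

# Refined hypothesis: "if I drink something with caffeine, I will feel alert."
caffeinated = {"regular coffee", "caffeinated tea"}
print(falsified(lambda d: d in caffeinated))  # False -- survives testing (so far)
```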

Anyhow, we can apply the same method to the anti-vax hypothesis that high vaccine compliance rates have no effect on incidence rates of vaccine-preventable diseases (i.e., that herd immunity is a myth). The conditional hypothesis is "if a population has a high vaccination rate, then its infection rate will be the same as a population with a low vaccination rate (ceteris paribus)."  Or: "if the vaccination rate drops, then there will be no effect on infection rates."

[Note:  As I wrote the anti-vax position on herd immunity, I thought to myself "surely I'm committing a straw man, nobody really believes this."  Alas, I was wrong...1, 2]

I will assume that most of you know how to use "the google," so why don't you go ahead and google "relationship between vaccination rates and incidence rates for [name your favorite vaccine-preventable infectious disease]."  Well?  You will find that there is a very strong inverse relationship between a population's vaccination rate for a vaccine-preventable disease and the incidence rate for that disease.
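For what it's worth, here's what a "strong inverse relationship" looks like numerically.  The (vaccination rate, incidence) pairs below are entirely made up for illustration--swap in the real figures your googling turns up:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical populations: vaccination rate vs. cases per 100,000.
vaccination_rate = [0.80, 0.85, 0.90, 0.93, 0.96, 0.98]
cases_per_100k = [42.0, 30.0, 15.0, 8.0, 3.0, 1.0]

r = correlation(vaccination_rate, cases_per_100k)
print(f"Pearson r = {r:.2f}")  # strongly negative, close to -1
```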

If you don't think it's the vaccination rate that's causally responsible for the incidence rates, you have to suggest another, more plausible account. What is it?  Hand-washing? Diet? The problem with these is that there's no evidence that in the last 10 years people in California, Oregon, and parts of the UK, where outbreaks of various vaccine-preventable diseases have occurred, have changed their hand-washing and/or dietary habits.  They have, however, changed their vaccine compliance rates...negatively.  Hmmm...

If you still think herd immunity is a myth, please provide in the comments section your conditional hypothesis explaining why, when vaccination rates go down in first-world populations, the incidence rate of the corresponding vaccine-preventable disease goes up. What is your proposed causal mechanism?  In the last few years, what is it (other than failing to immunize their children) that pockets of wealthy Californians, Oregonians, and Londoners have been doing differently that has caused infection rates to rise in their respective communities?

Thank you for taking the time to read this.