
Thursday, September 11, 2014

Gettier Revisited

I've talked about Gettier before and much of what I say here will overlap with what I've already said.  It turns out I'm much more comfortable lecturing on something if I write about it the night before. Hopefully this won't always be the case 'cuz it takes up a lot of time...

Why Should You Care About Gettier?
You should care about Gettier because he showed that (so far) no one has managed to give an adequate definition of knowledge.  Why should we care about defining knowledge anyway?  Well, because there is, at least conceivably, a difference between merely believing something and knowing it.  What might the difference be?  What does it mean to "know"?  From the ancient Greeks all the way up until about 50 years ago, to know something meant that 3 conditions were met:

(1)  The subject believes p.
(2)  The subject is justified in believing p.
(3)  p is true.

These 3 conditions for knowledge are collectively known as the justified true belief theory of knowledge (JTB).  Sounds pretty plausible, right?
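In symbols (my own shorthand, not anything from Gettier), writing B_S(p) for "S believes p", J_S(p) for "S is justified in believing p", and K_S(p) for "S knows p", the JTB analysis says:

\[ K_S(p) \iff B_S(p) \land J_S(p) \land (p \text{ is true}) \]

The Gettier cases below are situations where everything on the right-hand side holds and yet we hesitate to assert the left-hand side.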

Let's look at some counter-examples to the JTB theory or, as they are now called, Gettier cases:

Gettier Case 1
Suppose Smith and Jones apply for the same job.  On his way to the bathroom, Smith overhears the boss say that Jones is going to get the job.  Also, while in the waiting room, Jones emptied his pockets onto the table, counted the contents (there were ten coins), and put them back in his pocket.  From this information Smith infers the proposition "the man who will get the job has 10 coins in his pocket."  Given the available evidence, this seems like a perfectly reasonable inference.

What actually happens is that Smith ends up getting the job.  Now here's the crazy part: it turns out that, unbeknownst to him, Smith also has exactly 10 coins in his pocket!  Given what's happened, can we say that Smith knew that "the man who will get the job has 10 coins in his pocket"?

Let's see what the JTB theory says:
(1)  Did he believe that the man who will get the job has 10 coins in his pocket?  Yup.
(2)  Was he justified in believing the man who will get the job has 10 coins in his pocket?  Yup.
(3)  Is it true that the man who will get the job has 10 coins in his pocket?  Yup.

So it seems that, according to the JTB theory, Smith knew that the man who will get the job has 10 coins in his pocket, but something isn't quite right!  Let's look at one more example before we figure out what's gone wrong:
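If it helps to see the structure laid bare, here's a toy sketch in Python (the function and variable names are my own, purely for illustration): it treats the three JTB conditions as booleans and checks them for Smith's belief.

def jtb_knows(believes, justified, is_true):
    # JTB: knowledge = belief + justification + truth
    return believes and justified and is_true

# Smith's belief: "the man who will get the job has 10 coins in his pocket"
believes = True    # Smith infers and accepts the proposition
justified = True   # the overheard remark plus the coin-counting are good evidence
is_true = True     # Smith gets the job and happens to have 10 coins himself

print(jtb_knows(believes, justified, is_true))  # prints True: JTB says "knowledge"
# ...yet intuitively Smith just got lucky, which is the whole point of the case.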

Gettier Case 2
Bob has a friend, Jill, who has driven a Buick for years.  Bob therefore thinks that Jill drives an American car.  He is not aware, however, that her Buick has recently been stolen, and he is also not aware that Jill has replaced it with a Pontiac.  Does Bob really know that Jill drives an American car or does he only believe it?

Let's see how the JTB theory handles this:
(1)  Does Bob believe that Jill drives an American car?  Yup.
(2)  Is Bob justified in believing that Jill drives an American car?  Yup.
(3)  Is it true that Jill drives an American car?  Yup.

According to the JTB theory, Bob knows that Jill drives an American car.  But something doesn't seem right about that.  It looks like he just happened to get lucky.

Let's figure out what's going wrong in these cases.

Why Gettier Cases Happen:
Gettier cases happen because of our acceptance of two assumptions about justification.

Assumption 1: It's possible to have a justified belief that turns out to be false.  For example, when I'm at work I'm justified in believing that my car is where I left it, because I remember parking it and I saw it in its spot as I walked to my building.  Of course, it could happen that someone steals my car or that it gets towed.  If that happens, the fact that my belief about my car's location turned out to be false doesn't undermine the fact that I was justified in believing it was where I parked it.

Assumption 2: Justification is transmitted by valid inference from one belief to another.  For example, if I'm justified in believing that it's raining, then I can infer that there are clouds.  Since the initial belief (it's raining) is justified, the inferred belief (there are clouds) is also justified.  In fancy talk: if I know that P entails Q, and I'm justified in believing P, then I'm also justified in believing Q.
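In the same home-made shorthand as before, assumption 2 reads:

\[ \big( J_S(P) \land K_S(P \rightarrow Q) \big) \rightarrow J_S(Q) \]

that is, justification is carried along by inferences the subject knows to be valid.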

Rejecting assumption 2 would make life really difficult; if logical inference is to have any value, we need to keep it.

To see how the interaction of these 2 assumptions causes Gettier cases, let's take a look at an example from Dretske.

Example 3:
Suppose you see a stack of your friend John's mail.  It's all addressed to him in San Francisco.  You form the belief that John lives in San Francisco, and the evidence of the mail justifies it.  It turns out, however, that he actually lives in LA.  Per assumption 1, the fact that your belief about where he lives is false doesn't mean you weren't justified in holding it.  From your (false) belief that he lives in SF you make a perfectly valid inference and form a new belief: John lives in California.  By assumption 2, this new belief is justified too, and it also happens to be true, since LA is in California.  So you now have a justified true belief about where John lives.  But you reached the belief that he lives in California through the false belief that he lives in SF, so you don't really know that he lives in California.  The inference from the false belief is what undercuts our ability to call the belief "knowledge".  The JTB theory fails to capture what we mean by "knowledge".
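To make that structure explicit, here's a little Python sketch (the class and field names are my own invention): each belief records the premises it was inferred from, so we can check whether a justified true belief ultimately rests on a false one.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Belief:
    claim: str
    justified: bool
    is_true: bool
    premises: List["Belief"] = field(default_factory=list)

    def rests_on_false_belief(self) -> bool:
        # Walk back through the inference chain looking for any false premise.
        return any((not p.is_true) or p.rests_on_false_belief() for p in self.premises)

lives_in_sf = Belief("John lives in San Francisco", justified=True, is_true=False)  # the mail suggests it, but it's false
lives_in_ca = Belief("John lives in California", justified=True, is_true=True,
                     premises=[lives_in_sf])  # validly inferred from the SF belief

print(lives_in_ca.justified and lives_in_ca.is_true)  # True: a justified true belief...
print(lives_in_ca.rests_on_false_belief())            # True: ...but it was inferred from a falsehood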

Attempted Rescue of the JTB Theory:
In example 3, what prevents us from calling the belief that John lives in California knowledge is the inference from a false belief (that John lives in SF).  In example 1, the belief that "the man who will get the job has 10 coins in his pocket" isn't knowledge because it is inferred from the false belief that Jones is going to get the job.  In example 2, the belief that Jill drives an American car isn't knowledge because it's inferred from the false belief that Jill still drives a Buick.

Do you see the solution?  All we need to do is add a 4th condition to the JTB model.  Now, a person knows p when:
(a) she believes p;
(b) she is justified in believing p;
(c) p is true; and
(d) p isn't inferred from a false belief.

Ta da! So long as all four conditions are met, we can say that a person knows p.

Uh oh!  Party Time!
In my other life I'm a party planner.  Every month I plan a wonderful party, and I need to know which room to rent for it.  My decision is based on how many people attended the previous month's party.  If fewer than 40 people attended, I rent the standard room; if 40 or more attended, I rent the large room.

I ask my assistant how many people attended this month.  He says 78.  I then infer, from the belief that 78 people attended, the belief that I will need to rent the large room.  This seems like a legitimate inference, right?  78 is definitely greater than 40.  But hold on a tick.  It turns out my assistant miscounted: there were only 77 guests.  I've just made an inference from a false belief (i.e., violated condition (d)), yet it seems as though we can say that I know I will need to rent the large room.
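The decision rule, as a quick Python sketch (the numbers come from the story; the function name is mine):

def room_to_rent(attendance):
    # Fewer than 40 guests at this month's party -> standard room; 40 or more -> large room.
    return "large room" if attendance >= 40 else "standard room"

print(room_to_rent(78))  # what my assistant reported: large room
print(room_to_rent(77))  # the actual headcount: still large room
# The decision would only change if the true attendance were below 40,
# i.e. if my assistant had overcounted by 39 or more people.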

Not So Fast!
Hold your horses.  There's something fishy going on here.  Yeah, OK, strictly speaking you made an inference from a false belief, but there was an implied probability judgement.  You know that any plausible error in your assistant's count is too small to matter for the inference; the likelihood of him miscounting badly enough to put the true number below 40 (he'd have to be off by nearly half) is vanishingly small.  In other words, if the count is off by a bit it's not going to affect the truth of the inferred belief (in this case, that I'll need the large room).  Let's add a 5th condition to the JTB theory:

(e) p has to have a sufficiently high probability of truth in order to count as knowledge.

Who's a Loser Now?
So it looks like we've got our theory of knowledge all figured out.  As long as p meets all 5 conditions, we can count it as knowledge.  Let's take a closer look at the 5th condition and see if it stands up to scrutiny:

The fifth condition says that for p to count as knowledge, in addition to meeting the previous 4 conditions, it must also have a sufficiently high probability of being true.  Might there be a counter-example?

Suppose there is a lottery with 1 billion tickets.  There's a 1/1,000,000,000 chance that ticket 0000000000 will win.  There's the same chance that ticket 0000000001 will win, and so on for every ticket.  Of each ticket, it's reasonable to believe that it won't win.  There is a very high probability that ticket 0000000000 won't win, but could you say that you know it won't win?  It seems that no matter how great the odds against its winning, you can't say that you know it won't win.  It appears we have constructed a counter-example to (e): even though there is a very high probability that "I won't win" is true, I can't say that I know I won't win.
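Just to put a number on it (a back-of-the-envelope calculation, nothing more):

tickets = 1_000_000_000
p_win = 1 / tickets       # chance that any particular ticket wins
p_lose = 1 - p_win        # chance that that same ticket loses

print(p_lose)  # 0.999999999 -- about as probable as anything we ordinarily claim to know
# Condition (e) is satisfied as well as it could ever be, yet we still balk at
# saying we *know* the ticket will lose.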

OK, Let's Try a Different Approach
As I mentioned earlier, the root of the problem for the JTB theory is not that we need additional conditions but rather that we have accepted assumptions 1 and 2.  So why don't we reject one or both of them?

Let's Reject the Idea that It's Possible to Have a Justified Belief that Turns Out False
Well, first of all, we already saw what happens if we do this: we end up like Descartes, rejecting everything except the fact that we exist (without a body).  Basically, the only justifications that would count are ones where p couldn't possibly turn out to be false.  That's not going to work too well, or at least it will make knowledge very difficult to come by.

Consider the John-in-California example.  If we reject assumption 1, then the fact that all of John's mail is addressed to SF no longer lets us say we are justified in believing he lives in SF.  This seems counter-intuitive.  How much evidence would we need before it would be impossible for the belief to turn out false?  Aside from the practicality issue, this course seems implausible.
