Introduction
In philosophy it's often said that there are 'lumpers' and 'splitters'. Lumpers try to unify discrete kinds under one category while splitters argue for maintaining (and insisting on more) distinctions between kinds. When it comes to critical thinking, I tend toward the former. Especially at the end of a semester, I always find myself obsessing over the best way to distill an entire course into as few basic principles as possible. Some textbooks explicitly do this (see Robert Shanabs and Sharon Gould's excellent TLC method) while others approach critical thinking as a series of discrete modules.
Here is my latest distillation.
RRAR! Method: Critical Thinking for the Digital Age
Preliminary Step: Set Up
Before we begin any evaluation we need to put the subject of our evaluation into premise-conclusion form. I'm not going to fully explain it here but if you're interested, here is the unit from the beta version of the textbook I'm writing (I'm still editing it so don't scream at me about layout and the typos n' stuff!). Basically, identify the conclusion (i.e., what the author is trying to persuade you to believe) and the main premises (the reasons and evidence used to support the conclusion). List the premises with the conclusion at the bottom.
E.g.,
(P1). The activities and decisions that most affect your well-being require you to be able to think well in order to make the right choices.
(P2). Critical thinking is a systematic method for thinking well.
(C). Therefore, you should study critical thinking if you want to increase your well-being.
Step 1: R=Reliability of the Source
When we approach an article, video, meme, and so on, our first step should be to evaluate the reliability of the source. Dismissing an argument outright based on its source is an instance of the genetic fallacy, so we should be careful not to do that. However, if an argument comes from a source known to be heavily biased or unreliable, this tells us that we need to be extra skeptical during our investigation. Importantly, this means we should be on the lookout for slanting by distortion or omission and the fallacy of confirming evidence (see the third section in the link).
Step 2: R=Relevance of Each Premise
In the context of arguments, relevance is the degree to which a premise increases the likelihood of the conclusion being true. Relevance comes in degrees. To understand the concept, let's look at two common fallacies: the argument from tradition and the naturalistic fallacy. Both are fallacies because their main premise is irrelevant to the conclusion.
Example 1: Women should stay home and raise the children since that's what they've always done.
Standard Form:
(P1) Women have always stayed home and raised the children.
(C) Therefore, women should stay home and raise the children.
Notice that even if (P1) is true, it doesn't meaningfully increase the likelihood of the conclusion being true. What women have done in the past has no bearing on what they should do now. Someone might point to other reasons (e.g., having mammary glands) for which women should raise children. But that's a separate argument--whatever you think of it. Merely pointing to what women used to do isn't, on its own, relevant to what they should do now.
If you're not convinced, let me give you another example using the exact same reasoning (appeal to tradition):
Example 2: Humans have always murdered and raped therefore humans should murder and rape.
Standard Form:
(P1) Humans have always murdered and raped.
(C) Therefore, humans should murder and rape.
Again, while (P1) is probably true it isn't relevant to whether we should murder and rape now. It doesn't meaningfully increase the likelihood of the conclusion being true. Some bleeding-heart liberals might even suggest there are reasons against murdering and raping [GASP!]. Some traditional human behaviors are good, some are bad, and there's everything in between. Merely knowing that something was done traditionally doesn't tell us either way whether it's good or bad or whether we should do it.
Example 3: This snack is natural therefore it's good for you.
Standard Form:
(P1) This snack is natural.
(C) Therefore, it's good for you.
Whether something is natural or not doesn't tell us whether it's good for us. Plenty of natural things are poisonous (hemlock and arsenic, for starters), so merely knowing that something is natural doesn't increase the likelihood that it's good for us.
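For readers who like a bit of notation, here's one informal way to gloss the relevance idea probabilistically. This is just my sketch of the intuition, not a formal definition from any textbook:

```latex
P(C \mid P_1) > P(C) \quad \text{($P_1$ is relevant: it raises the probability of the conclusion $C$)}
P(C \mid P_1) \approx P(C) \quad \text{($P_1$ is irrelevant: it leaves the probability of $C$ essentially unchanged)}
```

On this gloss, the appeal to tradition fails because learning that women have always stayed home leaves the probability of the conclusion that they should do so essentially where it started.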
A more advanced way of evaluating relevance is to identify the enthymeme, but that's another lesson. We'll just stick to the basics here.
Step 3: A=Acceptability of the Premises
By 'acceptable' I mean something close to 'true'. Suppose it turns out that all the premises in an argument are relevant to the conclusion. That doesn't matter a hoot if they're all false! In critical thinking I don't like to use the word 'true', for reasons I'll skip over here. Instead I use 'acceptable', by which I mean simply that a premise would be accepted by a reasonable audience without further evidence. At Step 3, I apply the reasonable person test to each premise.
If we answer "not sure"or "there could be disagreement" to a premise then we get on our google machine and investigate. Also, this is where the Reliable Source criteria comes in: If the source of the argument is known to be unreliable or heavily biased, we should--nay! must!--verify each premise. The reasonable person test won't suffice.
Step 4: R=Relative to What?
Step 4 is going to be applied at all stages of the evaluation. It makes me cringe to say this but with respect to a lot of things, "everything is, like, relative maaaaaaan."
With respect to the source of the argument, reliability is relative. Suppose Source A is considered reliable. It contains an argument that X is false. However, I encounter Source B, which argues that X is true. The relative reliability of A and B will inform my evaluation. Even though A is a reliable source, B could be more reliable, just as it could be less so. All else being equal, I should go with B over A if B is more reliable relative to A.
Relevance also needs to take relativity into account. Suppose an argument presents relevant evidence in favor of a conclusion. I need to weigh that evidence against the relevance of the evidence against the conclusion. For example, there might be a preclinical trial showing that X cures cancer. Preclinical trials have very small sample sizes and rarely have control groups or blinding; they are low-quality evidence. However, suppose there's also a Phase II trial (blinding, control group, larger sample size) showing that X doesn't cure cancer. The strength of the evidence that X cures cancer is weak compared to the evidence against the claim: the Phase II evidence is more relevant to the conclusion than the preclinical trial is. Claims rarely have all and only evidence pointing in one direction. To repeat: I must weigh the relevance of the positive evidence against the relevance of the negative evidence.
The same goes for acceptability. Some premises will be more easily accepted by reasonable people than others.
Both relevance and acceptability require we apply the concept of relativity in another respect. Very often arguments (and conclusions) will make claims that include words like increase, decrease, good, bad, effective, ineffective, cheap, expensive, risky, beneficial, harmful, and so on. In order to even interpret claims that contain these words we must know the appropriate comparison class.
For example, if I say that the stock market increased, before I can even evaluate whether that's relevant or acceptable I need to know relative to what? To yesterday? An hour ago? Ten years ago? To the Japanese stock market? To the bond market? To interest rates?
If a policy causes some people to pay higher taxes, I need to know: relative to what? Relative to last year? To 40 years ago? Relative to another group? Which group? It's vitally important to know what the comparison class is. Without it, we can't evaluate either relevance or acceptability.
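To see how much the comparison class matters, here's a toy numerical illustration (all numbers invented) of the "stock market increased" claim:

```python
# Toy illustration: whether "the market increased" is acceptable depends on the baseline.
# All numbers are made up for the example.

index_today = 4800
baselines = {
    "yesterday": 4750,       # slightly lower -> "increased" looks acceptable
    "one month ago": 5000,   # higher -> "increased" looks unacceptable
    "ten years ago": 2100,   # much lower -> acceptable, but a very different claim
}

for label, baseline in baselines.items():
    change = (index_today - baseline) / baseline * 100
    verdict = "increased" if change > 0 else "decreased"
    print(f"Relative to {label}: {change:+.1f}% -> the market {verdict}")
```

Same headline claim, three different verdicts. Until the comparison class is fixed, we can't even begin to assess acceptability or relevance.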
Worksheet
I'm thinking about creating a worksheet for students that looks like this for each argument they must evaluate:
Set Up: Put the Argument into Premise-Conclusion Form
P1.
P2.
P3.
P4.
C.
Step 1: Reliability of the Source
Score: ___/7 (1 = very low reliability, 7 = very high reliability)
Explain why you gave the source the score you did:
Step 2: Relevance
For each premise, assign a relevance ranking of low, medium, or high, then explain your ranking in a sentence. Identify any claims that might be comparative and identify the comparison class, or write "ambiguous".
P1. Low/Medium/High because:
P2. Low/Medium/High because:
P3. Low/Medium/High because:
P4. Low/Medium/High because:
*If a premise has low relevance, its acceptability won't matter: a true but irrelevant premise doesn't increase the likelihood of the conclusion being true.
Step 3: Acceptability
For each premise, state whether it is acceptable, unacceptable, or unsure. If unsure because of language problems, look for contextual clues. If unsure because you don't have enough information, google it and then reassess. Cite your sources. If unsure because of an ambiguous comparison class, try to identify the author's implied comparison class.
P1. Acceptable/Unacceptable/Unsure because:
P2. Acceptable/Unacceptable/Unsure because:
P3. Acceptable/Unacceptable/Unsure because:
P4. Acceptable/Unacceptable/Unsure because:
Step 4: Relative to What?
With respect to the conclusion, identify the correct comparison class. For example, if the conclusion is that a certain policy is bad, ask: bad compared to which alternative policies? Make the appropriate comparison of both costs and benefits.
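Since this is critical thinking for the digital age, here's a minimal sketch of what the worksheet might look like as a fill-in-able data structure, in case anyone wants to build a digital version. It's my own toy encoding of the steps above, not official course material, and all the field names are made up:

```python
from dataclasses import dataclass, field

@dataclass
class PremiseEvaluation:
    text: str
    relevance: str = "unrated"        # low / medium / high
    relevance_reason: str = ""
    acceptability: str = "unrated"    # acceptable / unacceptable / unsure
    acceptability_reason: str = ""
    comparison_class: str = ""        # write "ambiguous" if it can't be identified

@dataclass
class RRARWorksheet:
    conclusion: str
    conclusion_comparison_class: str = ""
    source_reliability: int = 0       # 1 = very low reliability, 7 = very high
    reliability_reason: str = ""
    premises: list[PremiseEvaluation] = field(default_factory=list)

# Example: the argument from the Set Up step, before Steps 1-4 are filled in.
worksheet = RRARWorksheet(
    conclusion="You should study critical thinking if you want to increase your well-being.",
    premises=[
        PremiseEvaluation("The activities and decisions that most affect your well-being "
                          "require you to think well in order to make the right choices."),
        PremiseEvaluation("Critical thinking is a systematic method for thinking well."),
    ],
)
```

Fill in the reliability score, rankings, and reasons as you work through Steps 1-4 for a given argument.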
Conclusion
Well, there you have it: the most recent incarnation of a critical thinking system based on as few principles as I can get away with. If you apply these four steps to every argument, you'll soon find yourself a beast of critical thinking. RRAR!
Critical Thinking: The Secret Technique Professors Don't Want You to Know!
Introduction
Critical thinking sounds fancy, but it's something most of us do every day. In fact, most of us are quite good at it...so long as we are properly motivated. Unfortunately, we absolutely suck at it when the motivation is wrong--and I'm not talking about money.
What do I mean by all this talk of correct and incorrect motivation? What I mean is that, when someone is trying to argue against our own cherished position and beliefs, most of us are very good at pointing out the problems with our opponent's arguments. We are primarily motivated to want to be 'right' and to protect our most cherished beliefs from criticism and so, in such cases, our critical thinking skills are generally quite good.
We are absolutely horrible critical thinkers when an argument or evidence favors our existing cherished beliefs. Why? Because, as I said before, we are primarily motivated to be right and to protect our beliefs, thus, we will usually uncritically accept any argument or evidence that supports our position. Any argument or evidence that serves our purpose is, ipso facto, good. Consider: When was the last time you were critical of an argument or evidence that supported your own position on an issue?
What's the moral of the story here? Most of us already have an intuitive grasp of critical thinking; what messes it up is our motivation to be right and to protect pre-existing conclusions. So, if we want to be good critical thinkers, we need to manipulate our motives, or at the very least be aware of their capacity for distortion.
What is Critical Thinking?
To understand what good critical thinking is, it helps to contrast it with poor critical thinking. The way most people reason is that they look at the conclusion of an argument or the conclusion some evidence implies and assess whether that conclusion agrees with their pre-existing beliefs and positions. If the argument/evidence agrees with or supports their pre-existing position then the argument/evidence is considered good. If the converse is true, then the argument/evidence is considered defective. To summarize the problem: Most people focus on whether they agree or disagree with a conclusion rather than on the quality of the argument/evidence. This approach is not good critical thinking.
In critical thinking we don't care two hoots whether we agree or disagree with the conclusion: all we are interested in is whether the argument or the evidence is good support for the conclusion. Critical thinking is mainly about two things: (a) standards of evidence (i.e., what constitutes good evidence?) and (b) the logical relationship and relevance of the premises of arguments to their respective conclusions (i.e., does the conclusion follow from the premises?). That's all. The end. Good night. (For convenience, I'll refer to both of these aspects as "quality of evidence/arguments".)
The Secret that Professors Don't Want You to Know:
Good critical thinking is all about focusing on the quality of the arguments/evidence relative to conclusions but unfortunately our brains are hardwired to look at the conclusions relative to our pre-existing beliefs. Because we should really focus on the quality of arguments/evidence, we need a trick to overcome the tendency to focus on the conclusion.
The Ol' Switcheroo Version 1: (a) If an argument or evidence supports your position, ask yourself if you'd find the argument/evidence compelling if the same quality of evidence/justification supported the opposite conclusion. (b) If an argument/evidence is against your current position, ask yourself if you'd find the quality of evidence/justification compelling if it supported your position.
For example (a): Suppose you think vaccines cause autism and to support your conclusion you cite the fact that your nephew has autism and he was vaccinated; therefore, vaccines cause autism. To apply critical thinking special secret #1 we construct a similar argument but for the opposite conclusion: E.g., I have a nephew and he was vaccinated and isn't autistic; therefore, vaccines don't cause autism.
If your original position was that vaccines cause autism, would this second argument cause you to change your position? Nope, I doubt it would, and for good reason: a single case doesn't tell us anything about causal relations. Notice that applying secret thinking sauce #1 allows us to focus on the quality of the evidence rather than on whether we like the conclusion. So, if the second argument fails as good support for the conclusion, so does the first, even though it supports your position. Boom! goes the dynamite.
Let's try another example (b): Suppose you are an anthropogenic climate change denier. Someone argues against your cherished beliefs by saying that 97% of climate scientists agree that human activity is responsible for climate change. Your natural reaction is to discount this as an insignificant argument because it contradicts your pre-existing position. Now apply critical thinking secret sauce #1 and ask yourself: If 97% of climate scientists denied that human activity has any effect on the climate, would you consider this good support for your position?
Let's try a moral example: In the "homosexuality is bad vs. homosexuality isn't bad" debate, both sides often make appeals to what is or isn't natural behavior as justification for their position. Let's apply the critical thinking secret sauce to both sides to show why both justifications are weak:
"Homosexuality is morally wrong because it's unnatural." The justification here is that moral wrongness is a function of whether something is unnatural. Now, applying the ol' switcheroo, we ask the person who takes this position: Supposing homosexuality were natural, would you then agree that homosexuality is morally permissible? They will likely answer, "no" thereby indicating that naturalness is a poor justification for moral permissibility.
But it isn't just evangelical moralists who use poor justifications for their claim. Let's apply the same test to those who argue that homosexuality is morally permissible because it is natural for a certain percentage of the population to be gay (usually some sort of genetic argument is given). Let's try applying the ol' switcheroo:
Suppose scientists discover that there is no "gay gene" and that homosexual behavior is purely a matter of some combination of socialization and personal choice. If this were the case, would proponents of the argument then say, "welp, I guess homosexuality is morally wrong after all"? Probably not. And the reason is that whether a behavior is natural or not tells us nothing about that behavior's moral status.
Whatever one's opinion on the moral status of homosexuality, the ol' switcheroo shows us that both positions cannot be supported through appeals to "naturalness". That is, the quality of that particular justification is weak regardless of which conclusion we are sympathetic to.
The Ol' Switcheroo Version 2: Sometimes issues are such that the simple switcheroo won't work too well in helping to focus our minds on the quality of arguments/evidence; so, we need a variation of the switcheroo to deal with those situations. Here it is: (a*) If an argument/evidence supports your pre-existing position, ask yourself if a similar argument or evidence would be convincing to you on a different issue, one you are opposed to. (b*) If an argument/evidence is against your cherished beliefs, ask yourself if a similar argument would be convincing on an issue where you are a proponent.
Basically, in this version we're trying to generalize the principle that is being used to justify a conclusion then apply it to other cases to see if the principle is being applied consistently or (as is often the case) the principle is being used when it supports a conclusion we like but is being denied when it supports a conclusion we dislike.
Example (a*): Suppose you think homeopathy works and that you are generally skeptical of conventional medicine. To support homeopathy you cite a particular scientific study showing that 70% of subjects no longer had condition X after homeopathic treatment. The study has a sample size of 10 and there's no control group. Ask yourself: would such a study convince you that a new conventional medication was effective for condition X?
Of course not. A sample size of ten is way too small to conclude anything of consequence, and the lack of a control group makes a study, especially of this size, essentially worthless. If the evidence in the second case wouldn't be good support for the conclusion, then the same applies to the first case. Critical thinking secret v.2 allows you to see why the evidence you've provided isn't good.
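If you want to see just how weak that kind of study is, here's a quick back-of-the-envelope simulation (all numbers invented for illustration). Suppose condition X clears up on its own in 60% of people; how often would a do-nothing "treatment" look at least as good as the study's 70% result in a sample of ten?

```python
import random

random.seed(1)

SPONTANEOUS_RECOVERY_RATE = 0.6   # assumed: condition X often clears up untreated
SAMPLE_SIZE = 10                  # same size as the homeopathy study
TRIALS = 100_000                  # number of simulated ten-person studies

lookalikes = 0
for _ in range(TRIALS):
    recovered = sum(random.random() < SPONTANEOUS_RECOVERY_RATE for _ in range(SAMPLE_SIZE))
    if recovered >= 7:            # at least "70% no longer had condition X"
        lookalikes += 1

print(f"A do-nothing treatment matches the study in {lookalikes / TRIALS:.0%} of cases.")
```

Under these made-up assumptions, more than a third of ten-person "studies" of a useless treatment would look at least as impressive as the one cited. That's exactly the sort of thing a control group exists to catch.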
Example (b*): Suppose again your pre-existing position is that global climate change is not caused by human activity. Someone points out that 97% of climate scientists think the opposite: that global climate change is attributable to human activity. Now apply critical thinking secret sauce v.2: pick an issue where you have a pro position, or even one where you don't have a position. Suppose it's that it's consistent with the 2nd amendment that people should be able to own guns free of restrictions. Ask yourself: if 97% of all constitutional experts agreed that unrestricted gun ownership is consistent with the 2nd amendment, would you consider this to be a good reason in favor of your position? If yes, then you have to allow that it's a good reason in the first case too.