Wednesday, February 27, 2013

Critical Thinking: Extended Arguments and Inference Indicators

Introduction
Up until now we've been applying our analytical skills to relatively simple arguments.  Now we will begin to apply those skills to extended arguments.  What's an extended argument?  Well, I'm glad you asked:  An extended argument is one that has a main conclusion supported by premises, which are in turn supported by sub-premises.  

When a major premise is supported by sub-premises, we can consider that premise to be a sub-conclusion.  Extended arguments are often more difficult to break down into premises and conclusion because there's a lot more information involved.  Also, it can sometimes be difficult to disentangle the sub-premises from the premises and the final conclusion from the sub-conclusion(s).  

Some General Strategies
As a general heuristic, work backwards from conclusion to premises to sub-premises.  First, try to identify the main conclusion.  A good way to go about it is to ask yourself, "what is the argument trying to convince me of?"  If all else fails, look at the title of the article...

Once you answer that question ask yourself "why does the arguer think I should believe this?" This will help you identify the main premises. What's left will often be sub-premises. 

In some extended arguments, once we've identified the main components it can still be difficult to distinguish what is supporting what--especially between a sub-conclusion and the main conclusion.  Here's a little trick to help make the distinction.  

Suppose you have 2 statements and you're not sure which is the main conclusion and which is a sub-conclusion.  Read one statement followed by "therefore" then read the next.  If it sounds awkward, try it the other way around.  Often, this can help sort things out.  
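If it helps, the "therefore" test can even be mechanized.  Here's a toy Python sketch (the function name and sample statements are just my own illustration) that prints both orderings so you can read each aloud and hear which sounds natural:

```python
# A toy helper for the "therefore" test: given two statements, produce both
# candidate readings. The judgment of which reading sounds awkward is still
# entirely up to the human reader.

def therefore_test(statement_a, statement_b):
    """Return both orderings of two statements joined by 'Therefore'."""
    return [
        f"{statement_a} Therefore, {statement_b}",
        f"{statement_b} Therefore, {statement_a}",
    ]

for reading in therefore_test(
    "All the cool kids owned pianokey neckties.",
    "The pianokey necktie was an important milestone in men's fashion.",
):
    print(reading)
```

Reading the two printouts out loud, the first sounds natural and the second sounds backwards, which tells you which statement is doing the supporting.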

Extended Arguments:  Argument Extend-a-Mix
So, why should we care about extended arguments?  There are a couple of reasons.  First, most arguments we encounter "in the wild" as articles, essays, and books come to us as extended arguments.  Second, as you may have noticed, the premises of simple arguments don't always withstand scrutiny.  

This implies that if an arguer wishes to maintain her position against criticism, she will have to provide further sub-premises (i.e., reasons and evidence) to support the premises which are being criticized.  A good arguer will anticipate criticism and so will include the sub-premises as a pre-emptive defensive strike.  

Let's look at an example to illustrate what I'm talking about: 

Sample Simple Argument:
P1  Mugatu invented the pianokey necktie.
P2  The pianokey necktie was an important milestone in men's fashion.
C    Therefore, Mugatu is a fashion genius. 

Suppose someone takes issue with the premise acceptability of P2.  (Of course, they'd be wrong, but just suppose....) The person making this argument would then have to give further premises (reasons or evidence) to support P2.  For example, they might say "all the cool kids owned one."  The fact that all the cool kids owned a pianokey necktie further supports the premise that the pianokey necktie was an important milestone in men's fashion. 

The extended version of the argument would look like this:
P1  Mugatu invented the pianokey necktie.
P2  All the cool kids owned pianokey neckties.
P3  Given P2, the pianokey necktie was an important milestone in men's fashion.
C   Therefore, Mugatu is a fashion genius.
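One way to picture the structure is as a support tree: the main conclusion rests on premises, and a sub-conclusion rests on its own sub-premises.  Here's a rough Python sketch of the Mugatu argument in that shape (the dictionary representation is my own illustration, not standard notation):

```python
# The extended Mugatu argument as a support tree: each claim maps to the
# list of claims that directly support it. P3 is a sub-conclusion: it is
# supported by P2 and in turn helps support the main conclusion C.

argument = {
    "C: Mugatu is a fashion genius":
        ["P1: Mugatu invented the pianokey necktie",
         "P3: The pianokey necktie was an important milestone in men's fashion"],
    "P3: The pianokey necktie was an important milestone in men's fashion":
        ["P2: All the cool kids owned pianokey neckties"],
}

def supports(claim):
    """Return the claims that directly support `claim` (empty for basic premises)."""
    return argument.get(claim, [])

print(supports("C: Mugatu is a fashion genius"))
print(supports("P1: Mugatu invented the pianokey necktie"))  # [] -- a basic premise
```

The tree makes the double role of P3 visible at a glance: it appears both as a key (a conclusion being supported) and as a value (a premise doing the supporting).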

Analyzing Extended Arguments Using Inference Indicators
As I mentioned earlier, a problem with analyzing extended arguments is trying to distinguish between premises, sub-premises, and conclusion.  But do not despair, fair child.  There are yet more tricks to help us.  

Paying attention to inference indicators will often help us disentangle argument components.  An inference indicator is a word or phrase that signals whether a sentence is a premise or a conclusion.

Here are some common indicators for premises:  Since, because, for, as can be deduced from, given that, and the reasons are.

Here are some common indicators for conclusions:  Consequently, so it follows, thus, hence, therefore, and we conclude that.
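Just to show how mechanical indicator-spotting can be, here's a little Python sketch that tags a sentence using the lists above.  (A real argument still needs human judgment: "since" can mark time rather than inference, and this naive substring check will mis-tag words like "before" that merely contain "for".)

```python
# Tag a sentence as premise, conclusion, or unknown based on the indicator
# words listed above. Conclusion indicators are checked first; a sentence
# with no indicator at all gets "unknown".

PREMISE_INDICATORS = ("since", "because", "for", "as can be deduced from",
                      "given that", "the reasons are")
CONCLUSION_INDICATORS = ("consequently", "so it follows", "thus", "hence",
                         "therefore", "we conclude that")

def classify(sentence):
    s = sentence.lower()
    if any(word in s for word in CONCLUSION_INDICATORS):
        return "conclusion"
    if any(word in s for word in PREMISE_INDICATORS):
        return "premise"
    return "unknown"

print(classify("Therefore, Mugatu is a fashion genius."))    # conclusion
print(classify("Given that all the cool kids owned one, it was a milestone."))  # premise
```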

Now go forth and analyze. 

Monday, February 25, 2013

Critical Thinking: Informal Fallacies Part 1: Red Herring and Straw Man

Introduction
In the last post we looked at the properties of a strong argument: (a) premise acceptability and (b) logical force (i.e., validity).  The concept of validity can be further sub-divided into two components:  (i) premise relevance and (ii) premise sufficiency.  Now we're going to look at the dark side of arguments:  fallacies.   Fallacies are common mistakes in argument, whether intentional or unintentional.

There are many different types of fallacies but the two that we will look at here have to do with how premises relate to the context of an argument.  They are the red herring and the straw man.  Both fallacies can be either intentional or unintentional.

Red Herring
A red herring is "an attempt to shift debate away from the issue that is the topic of an argument" (Groarke & Tindale; p. 66).   Basically, a red herring is an objection to a position that doesn't address the actual argument.  Its premises are irrelevant to the conclusion it seeks to negate/oppose.

Let's look at an example from Plato's Republic:

Socrates:  Rebecca Black is such a great singer.  Her voice is a combination of Jesus and Fergie.
Glaucon:  Whatev, her voice is auto-tuned.  If it weren't or if she were singing live you'd hear that she's out of tune.  Therefore, she is not a great singer.
Socrates:  Why do you hate her? OMG, you're so mean!

Glaucon's argument is that Rebecca Black's voice isn't very good, and he provides reasons.  Instead of replying to Glaucon's argument by addressing his premises or reasoning, Socrates brings up an issue irrelevant to the argument.  In short, Socrates' premises are not relevant to the conclusion he's trying to support, that Rebecca Black is a great singer.   That is to say, Glaucon's opinion of Rebecca as a person has no bearing on whether she's a good singer or not--regardless of what day of the week it is.

The red herring fallacy has many cousins and sub-species which we'll examine later in the course.  Some of them you may have heard of:  non-sequitur, ad hominem, and tu quoque.  When you use the Latin names you can really impress your friends...yay!

Straw Man
The straw man argument is similar to the red herring in that it doesn't address the actual argument.  It differs in that a straw man fails to address the opposing argument because it misrepresents or distorts it.  A straw man argument often contains a grain of truth, but the opposing position is so blown out of proportion that it is hardly recognizable.   The general purpose of a straw man argument is to present an opponent's position in a way that makes it seem ridiculous, weak, and obviously wrong.

A great source for straw man arguments is any heavily biased news source.  Sentiments like "Obama's going to take all our guns" are straw man arguments against proposed gun control legislation.  While there may be some truth in that the proposed legislation seeks to ban assault weapons, no part of the bill requires all gun owners to turn in every type of gun they own.   Conversely, proponents of gun-control legislation might make a straw man out of the legislation's opponents by arguing that pro-gun people don't want any restrictions at all on gun ownership or the types of guns that can be owned.

From the point of view of critical thinking there are a few important points to notice:  (a) the straw man gun control arguments on both sides distort the opponent's position such that its actual content isn't being addressed, (b) because the opposing argument is distorted, it seems ridiculous and easy to refute, and (c) because the actual content isn't being addressed, the topic of the argument gets shifted away from the actual premises, making meaningful dialogue difficult.

Hotly debated topics are fertile ground for straw man arguments.  For good examples read the comments section for any article on GMO, nuclear power, natural gas, gun control, health care (in the US), immigration policy, and public policy regarding religion.  

Fallacy Fest! (It's not what you Think....)




It's my lucky day.  This just popped into my newsfeed.  Can you find the red herring and the straw man arguments?  (Both are conveniently contained in one meme!)  Note that pointing out the logical fallacies has nothing to do with whether you agree or not with the conclusion/point of view.  It is only an evaluation of how an arguer arrived at a particular point of view.  

As I've mentioned before, it is perfectly possible to give terrible arguments for a true conclusion. As critical thinkers we seek to separate our analysis of the argument from our approval/disapproval of the conclusion and from its truth/falsity.

Ok, I can't help myself.  The people who made this meme are so scientifically illiterate that they list creatine as an artificial sweetener.  Good lord...  (Bonus, what logical fallacy did I just commit?)


Wednesday, February 20, 2013

Critical Thinking: Evaluating Logical Strength of Deductive and Inductive Arguments through Relevance and Sufficiency

Introduction
In the previous post we talked about logical force or logical consequence (they are interchangeable).  These terms refer to the degree to which we must accept the conclusion if we've assumed the premises to be true.  When an argument has maximum logical force we say it is valid.  Generally, there are two types of logical force: deductive and inductive.

Deductive Validity
Deductive validity means that if we accept all the premises as true, we must accept the conclusion as true.  Otherwise stated, in a deductive argument, the conclusion necessarily follows from the premises.  Think of deductive arguments as something akin to math.  If you are told x=2 and y=3 and you are adding, then you must accept 5 as the answer.   Here are a couple very simple examples to illustrate the principle:

Sample A
P1.  Bob is a man.
P2.  Bob likes turtles.
C. /.: Bob is a man that likes turtles.

Sample B
P1.  If it's raining, there are clouds.
P2.  It's raining.
C /.:  There are clouds.

Sample C
P1.   Either dogs are pink or dogs are blue.
P2.   Dogs are not pink.
C /.:  Dogs are blue.

These examples may seem trivial but I want them to be simple in order to illustrate a point about how to evaluate an argument for deductive validity.  Here's what you do:  you suppose that all the premises are true--even if they aren't--and then you assess logical force; in other words, whether you are now forced to accept the conclusion.

Take Sample B.  P2 says "It's raining".  But suppose it isn't raining and you are asked to evaluate the logical force of the argument.  What would you say?  The correct answer is that it is deductively valid.  Why?  Because it don't make no gosh darn difference whether the premises are true or not when we evaluate logical force.  All we care about is whether we'd have to accept the conclusion if the premises were all true.

Let's do one more.  Look at Sample C.  All of the premises are empirically false.  Now suppose you are asked to evaluate the validity of the argument.  What would you say?  Valid or invalid?  Your answer should be that the argument is valid.  Again, when assessing validity we don't give a hoot about whether the premises are true or false or ridiculous.  All we care about evaluating is whether we are logically forced to accept the conclusion if all the premises are true.
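For simple arguments like Samples B and C, this test can even be made mechanical: run through every possible combination of truth values and look for one where the premises all hold but the conclusion fails.  Here's a small Python sketch (my own illustration, not a full logic engine):

```python
# Brute-force validity check: an argument is deductively valid iff there is
# no assignment of truth values that makes every premise true and the
# conclusion false. Premises and conclusion are functions of the atomic
# sentences.
from itertools import product

def is_valid(premises, conclusion, num_atoms):
    for values in product([True, False], repeat=num_atoms):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample row
    return True

# Sample B: atoms are raining (r) and clouds (c).
# P1: if it's raining, there are clouds; P2: it's raining; C: there are clouds.
sample_b = is_valid(
    premises=[lambda r, c: (not r) or c, lambda r, c: r],
    conclusion=lambda r, c: c,
    num_atoms=2,
)
print(sample_b)  # True -- valid no matter what the weather actually is
```

Notice the check never asks whether it is actually raining; it only asks whether any assignment of truth values can make the premises true and the conclusion false, which is exactly the point above.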

Now let's look at inductive validity.

Inductive Validity
An inductively valid argument is one in which the conclusion doesn't necessarily follow from the premises, but the premises make the truth of the conclusion likely.  Inductive validity has to do with probability of truth, not certainty.  Inductive arguments can take several forms and are most commonly (but not exclusively) found in science.  Here are a couple of samples to illustrate:

P1.  I've seen one raven in my life and it was black.
P2.  I've seen two ravens in my life and they were also black.
.
.
.
.
Pn.  I'm on my death bed and every got-tam raven I've ever seen is black.
C /.:  All ravens are black.

Notice that regardless of how many premises I have, I'm not forced to conclude that all ravens are black.  Maybe the day after I die, a white raven flies by my room.  Or maybe in some remote part of the world someone saw a white raven but didn't tell anyone else about it.   White ravens might be possible, which would negate the conclusion despite all the premises being true.  The point is this: with an inductive argument it's possible for the conclusion to be false even though all the premises are true.

Notice however that given the premises, our conclusion in this example is extremely probable, so we'd say the argument is inductively valid.

Let's do one more example:

P1.  A significant portion of the US population feel strongly about maintaining the right to bare arms and are politically active.
C.  /.: If the government proposes legislation banning short-sleeved shirts, this portion of the population will react strongly and challenge the law's legality.

This is an inductive argument because the conclusion doesn't necessarily follow.  It might be the case that the bare-arm rights group doesn't challenge the law.  However, given the supporting information in P1, we can say that the conclusion is highly probable.  In this case, we would say that the argument is inductively valid because the logical connection between the premises and the conclusion is strong.

Relevance and Sufficiency
Inductive and deductive validity are lots of fun, but there's more to the logic party than that!  When we evaluate validity we are essentially evaluating the relationship between the premises and the conclusion.  In the previous post we talked about assessing the premises in terms of acceptability; i.e., how reasonable or plausible they are.  But when we are assessing validity we also need to look at the premises from another point of view.

Can you guess what it is?  Did you guess "if they are true or not?"  I hope you didn't cuz that would be wrong.  Recall (and I will repeat this as often as necessary) validity has nothing to do with evaluating the truth or acceptability of premises; when we evaluate validity we automatically assume the premises are true...remember? Good.

Relevance
Ok, back to our discussion of other ways to evaluate logical force.  We can decompose logical force into two separate elements:  relevance and sufficiency.  Logical relevance is the degree to which the premises increase the likelihood of the conclusion being true.  For example,

Sample D (Inductive Arg)
P1  Bob likes cheese
P2  Bob likes ice cream
P3  Bob likes milk
P4  Bob likes sour cream
P5  Bob likes turtles
C /.:  Bob likes dairy products

We can ask of each of the premises in Sample D whether it supports the conclusion (or the degree to which it does).  In other words, we can ask how relevant each of the premises is to the conclusion.  Our aggregate evaluation of each will bear on our assessment of the argument's overall logical force.

In Sample D we can say that P1-P4 are relevant to the conclusion but P5 is not.  However, in this case, P5 doesn't diminish the strength of the argument.  The logical force doesn't change whether P5 is there or not.  In some arguments, however, the (ir)relevance of the premises will bear on the logical force of the conclusion.

Consider another argument:

Sample Argument E

P1  I like turtles
P2  My shoes are black
C  The chemical composition of water is H2O.

What is the logical force of this argument?  In Sample E the premises are not relevant to the conclusion, yet the conclusion is true.  What should we say?  Here's what:  it doesn't matter one fig that the conclusion is true when we are evaluating an argument for logical validity.  Recall that in this phase of evaluation, we assume all premises to be true.  So, let's do that.  Now, to assess logical validity we next look at the relevance of the premises to the conclusion.  Are the premises relevant to the conclusion?  I.e., do they increase the likelihood that the conclusion is true?  Nope.  Therefore this argument is logically invalid.

But, you cry (tears streaming down your face), the conclusion is true!  Yeah, I know, but as you should well know by now, when assessing validity (logical force) we don't care two hoots about truth.  Alz we care about is the logical relationship between premises and conclusion--in this case, relevance.

Final note on relevance: When you evaluate an argument for relevance you have to evaluate each premise individually. Why?  Because some of the premises might be relevant while others aren't.  You can't treat them as all relevant or all irrelevant until you've looked at each one. 

Sufficiency
Unlike relevance, we don't evaluate the sufficiency of each premise, we evaluate the sufficiency of the combined force of the premises.  Sufficiency refers to the degree to which the stated premises give us enough information to accept the conclusion as true or highly likely.  In other words,  since we can't know every relevant fact in the world (past, present, and future), are the facts contained in the premises enough on their own without any further reasons or evidence for us to reasonably accept the conclusion?   Think of sufficiency as the "enough-ness" of the total evidence presented for the conclusion.  

As you might expect, because sufficiency is about the logical relationship between premises and conclusion when we evaluate sufficiency we are assuming the premises are true.  We ask, given that all these premises are true, is this enough information on its own to force us to accept the conclusion?  I.e., is there a way for the premises to all be true, yet the conclusion false?  

Let's look at an example:

P1  Children are generally diagnosed with autism 6 months to a year after they get the vaccination for MMR.
C  /.: Therefore, the MMR vaccine causes autism.

Is P1 sufficient to accept C?  How do we evaluate this? We can approach this problem a couple of ways.  In all of them, begin by assuming P1 is true.  
Heuristic 1:   Ask yourself, does P1, on its own, guarantee the truth of C?  
Heuristic 2:  Counter-examples:  A counter-example is a case where all the stated premises are true but the conclusion turns out to be false.  To construct a counter-example, you try to find additional facts, reasons, or evidence that would leave the stated premises true but make the conclusion false or unlikely.  So... ask yourself if there are any facts that would allow us to continue accepting P1 as true yet would lead us to a conclusion that implies C is false.

Consider this:  The time at which children are diagnosed with autism is the same time at which important developmental changes take place in children's brains.  Due to genetic and environmental factors, these changes can manifest as autism--regardless of vaccine administration.  That is, the symptoms of autism become most easily diagnosable at the same time vaccines are typically administered--regardless of whether you actually administer the vaccines.  

This information allows us to continue to accept P1 as true, yet conclude something different (i.e., autism naturally manifests or becomes easily diagnosable at the same time children get MMR shots).  So, P1 on its own is not sufficient for accepting C.  We'd therefore say the premises are not sufficient to accept the conclusion, and so the logical force of the (inductive) argument is weak.

Consider one more example:

P1  There are clouds
C   /.: It's raining

Is P1, if true, sufficient to accept C?  No, because it's possible for P1 to be true and for C to be false.  That is, it can be cloudy without raining.  Again, we'd say the premises are not sufficient to accept the conclusion and therefore, the logical force of the (inductive) argument is weak.
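The counter-example heuristic can be pictured the same way: exhibit one consistent scenario where the premise holds but the conclusion fails.  Here's a tiny Python sketch for the clouds/rain argument (the scenario dictionary is just my own illustration):

```python
# Sufficiency test for "There are clouds, therefore it's raining": exhibit a
# scenario in which the premise is true but the conclusion is false. One such
# scenario is enough to show the premise is not sufficient for the conclusion.

def premise(scenario):
    return scenario["clouds"]      # P1: there are clouds

def conclusion(scenario):
    return scenario["raining"]     # C: it's raining

counterexample = {"clouds": True, "raining": False}  # a cloudy, dry day

is_sufficient = not (premise(counterexample) and not conclusion(counterexample))
print(is_sufficient)  # False -- the premise can be true while the conclusion is false
```

Finding a single cloudy, rainless day is all it takes: sufficiency fails as soon as one counter-example exists.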

Summary:
We can evaluate validity (i.e., logical force) from a couple of points of view, however in all of them we assume the premises to be true.  These points of view, when combined, contribute to our total assessment of the logical force of a particular argument.

One way to distinguish types of validity is according to whether the argument is deductive or inductive.  In a deductive argument, if the premises are assumed to be true you must also accept the conclusion as true (no matter how outrageous it is and even if the premises are actually false).

With an inductive argument, validity is a matter of degree. We evaluate the degree of logical strength by assuming the premises to be true and deciding whether this is compatible with the conclusion being false.  If it is unlikely that the conclusion is false then the logical strength is strong.  If there are many other likely conclusions to the argument that we could accept without questioning the truth of the premises, then the logical strength is weak.

We can further decompose the notion of logical strength (i.e., validity) into two sub-elements:  premise relevance and sufficiency.  When we evaluate relevance, we assume the premises are true and assess whether they impact the likelihood of the conclusion being true.  In other words, we look at how well each particular premise supports the conclusion. 

Sufficiency refers to whether the premises, when taken in toto, are enough on their own to guarantee the truth of the conclusion (if we assume the premises to be true).  One way to test for sufficiency is to try to come up with counter-examples, that is, cases that bring in additional premises but preserve the truth of the existing premises, and show that a different conclusion could follow from all the new premises.  A counter-example shows that there's other relevant information out there that might allow us to accept the premises as true, yet reject the conclusion.

Finally, when you are asked to assess logical force/validity/strength/consequence understand that this evaluation is made up of two separate criteria (i.e., to be evaluated independently of each other):  relevance and sufficiency.

When you give your final assessment of an argument's strength, refer to both aspects (as well as premise acceptability).

Tuesday, February 19, 2013

Thoughts on Ethical Eating, "Healthfood" Stores, and Sustainable Food


Part 1:  I'm a Ramblin' Man
Ok, so maybe I ought to put "ethical" from 'ethical eating' in bunny ears.  I'm still eating meat but, as I will explain,  I am making an effort to make sure that the meat I eat is humanely "produced" (?)--and I'm not eating bunny ears.  Man, I can't even write a got-tam sentence without qualifications and doubts.  Ok, how about this:  I'm just going to start writing and I'll have to qualify some stuff later otherwise nothing's going to get done, and ain't nobody got time for a post about nothin'.

First of all, as you may have guessed, I have serious doubts about whether it's ethical to eat meat at all--regardless of how it was raised.  This is why for the last 4 or 5 years, I've been intentionally avoiding reading on the topic.  Ignorance is bliss.  Well, that and the fact that I need to eat meat to maintain my lifestyle.  There it is. The heart of all arguments for eating meat: selfishness and willful ignorance.  

In the past I thought perhaps I could justify my meat-eating because my income depends on having a particular body-type.  That is to say, my income for the last 6 years has been directly related to having a muscular, lean body.  That body-type requires a high-protein, low-carb diet that is virtually impossible to obtain on a vegetarian diet.  You might be able to get enough protein by eating a ton o' beans, but beans also contain a lot of carbs; and so to get the protein, I'd have to take in a lot of calories I don't need.   And we all know what happens to unused calories.  Bye bye rectus abdominis.

So, in the past my argument was something like this: since my livelihood depends on a diet that requires meat, it is morally permissible for me to eat meat.  

But here's the problem.  Once you start to become sympathetic to the idea that eating meat is immoral, my argument doesn't work anymore.  For example, suppose your livelihood depended on eating the body parts of humans.  Nobody in their right mind would give this a pass as justification for eating humans.  The normal response would be, well, you should probably find another way to make a living.  It's not like anyone's putting a gun to my head to make me do my job.

Then there's the further problem of "humanely" raised (?) produced (?) meat.  If you find intuitive appeal in the moral principle that we ought not to kill sentient life when it is not necessary to do so, then it doesn't seem to matter whether animals are humanely or inhumanely raised for meat.  Nobody--except (perhaps) a few very rare medical cases--needs to eat meat.  Sure, a lot of gym rats might lose some inches on their biceptors, but do they need the extra inches? (On their arms...)  I mean, is the need so great that a sentient being has to lose its life?  Do hundreds of chickens a year per gym rat need to die so someone can have bigger pecs?  How do you justify that in any intelligible way? 

One might respond that if the animals are given a good life before the slaughter, then it's ok.  But I'm still not convinced by this argument.  Is the satisfaction of our selfish unnecessary human desires so important as to make permissible the killing of other sentient life?  

Anyway, I don't want to dwell too much on the arguments against eating meat; in this post, I mostly want to recount what it's like to make the transition to shopping at the "healthfood" supermarket.

Part 2: "Healthfood" Stores
O.                                   M.                          G.    Where to begin?  Alright...machine gun approach.  I feel like I just walked into an alternate universe where pseudo-science reigns supreme.  Homeopathy?  There's a whole section of the store devoted to it.  It boggles the mind that in this day and age, with modern scientific research readily available on the intertubes, people still fall for this stuff.  

But the inundation of pseudoscience isn't my biggest gripe.  My biggest gripe is that the majority of the food that is prominently displayed isn't healthful at all.  The amount of "all natural" candy, jube jubes, potato chips (they're organic!), chocolate covered almonds (get your anti-oxidants here!), soft drinks, etc... makes me want to scream.  

Here's the deali-yo.  All I ever hear from the eat-natural, occupy-food movement people is how the big (bad) evil food corporations are pushing unhealthy choices on consumers.  Now, I'm not disagreeing.  In fact, I totally agree.   But the health food stores are worse!  

These denizens of deceit are not only pushing unhealthy food choices like a US pharmaceutical company pushes drugs but they are doing it while gleefully announcing how healthful and "natural" these foods are.  The labels are a study in shameless hucksterism (is there any other kind?) and hypocrisy. 

Candied acai berries scream, "boost your immune system!"  Chocolate covered everything bellows, "no preservatives" as though the fact that you are eating 1000 calories is totally negated by this claim.  Soft drink labels brag about "real sugar" as though this again compensates for the unneeded calories and lack of nutrition.  The assortment of gummy candies is bewildering--but they're fat free! I don't even know what to say.  Not a thing in the store is sold without some sort of outrageous health claim.  

This might not be so egregious if all the junk food were tucked away in some aisle at the end.  But where is all this stuff displayed?  Right in the most prominent places in the store...and not only that, but the bulk candy section is proportionally bigger than the candy section you'd find in a regular supermarket.

When I go through the checkout, I look at what other people have bought.  It's totally anecdotal, but for what it's worth, you don't see food choices any better or worse than you would in a regular supermarket.  Some people have bypassed the devilishly advertised junkfood and opted for small amounts of (over-priced) organic fruits and veggies, while others have fallen prey.  

It's a travesty of a mockery of a sham.

Part 3:  Bring it Back Around Again
Ok, one last complaint, bringing this back to the original theme of the post:  "humanely" raised meat.  It's twice the price of regular meat.  Don't get me wrong.  I expect to pay a little more for a product that isn't factory produced.  But double?  

They want $7.00 for enough shaved turkey to make maybe 2 sandwiches.  I used to buy the premium sandwich meat with no preservatives or hormones etc.. (but not humanely raised to my knowledge) and that was $4.00 and I got about 3 sandwiches out of it.  That's a real difference over the course of a year.

I've read stuff in the past on the price of organic fruits and vegetables being inflated because retailers know consumers are willing to pay it.  But this does a huge disservice to the (ostensible) primary mission of the sustainable food movement.  People who might otherwise have made ethical food choices are priced out of the market just so youz guys can make fat profits.  This is starting to sound like a familiar story...

Now, I'm sure that not every organic food provider (wholesaler or retailer) is gouging customers.  Also, my research on the matter is spotty at best.  However, suppose there's some truth to this idea that organic food prices (the prices, not the food) are inflated.  Suppose that organic food producers know that people who make their food choices based on ethics are willing to pay a premium.  (Maybe it's a conspiracy!...Whoa! Dude!)  It wouldn't be too outlandish if there were at least some truth to this supposition at some level of the distribution chain.


Ok, so if we grant that supposition, it seems that at least some elements of the sustainable food movement are an impediment to its primary goal (world domination).

Fat-Free Conclusion (that also cures cancer)
So, where am I going with all this?  We've got ethical quandaries with eating meat, a hypocritical "health" food industry, and possible (but likely) price over-inflation.  Answer: I don't know.  I do know that I support the ostensible ideological aims of the sustainable food movement, but I am disappointed by what I done seen in the healthfood store.  

My disappointment probably stems from the fact that I'm probably still too much of a got-tam idealist.  You'd think the world would have squashed it out of me by now...

Possible reply:  Health food stores aren't really part of the sustainable food movement.  They are just one more example of capitalism finding a market and exploiting it.  True sustainable food shopping requires going to farmers markets; i.e., buying directly from producers.  

Possibly.  I think there may be some truth there.  However, (a) this reeks a little bit of the "no-true-Scotsman" fallacy and (b) there are plausible economic arguments against the environmental impact of small-scale production and movement of goods versus having these things done on a mass scale.  Anyone who's taken more than a few weeks of Econ can tell you about economies of scale... 

Meh...I'm done with this post.  Gimmi some feedback.  Sustainable eating is something I'm rasslin' with right now.   I'm open to suggestion. 

Thursday, February 14, 2013

Critical Thinking: Burden of Proof, Strong Arguments, and How to Criticize.

Introduction
Up until now we've spent quite a bit of time looking at the role of biases in argument.  Understanding how they influence arguers and our perception of arguments is important; however we're now going to move beyond the psychological aspects of analysis (tell me about your mother...) and start to hone our technical skills.

The first part of our technical analysis involves evaluating whether an argument is strong or weak.  A strong argument is one that is convincing for its audience and tough to criticize for its opponents.  A weak argument is, well, one that isn't very convincing and is easy to criticize.  Of course, most arguments are not 100% one or the other, but inhabit a space on the continuum between the two types.

Hopefully, as we learn to recognize the elements of a strong argument, we will learn to incorporate them into our own arguments.

How Do We Evaluate An Argument's Strength?
One thing we can look at to evaluate an argument's strength is who should bear the burden of proof.  In simple terms, burden of proof refers to which party the intended audience thinks must provide an argument for their claim.  Before I formally define this term, let's take a step back.  Recall that arguments can be decomposed into premises and conclusion(s).  A burden of proof can concern the premises or the conclusion; however, let's first focus on the concept as it applies to conclusions.

When we evaluate an argument for burden of proof we are essentially asking if its conclusion is reasonable.  That is to say, is it something that most reasonable people (in the intended audience) would accept as true?  If the assertion is reasonable, then the opponent bears the burden of proof to show that we should not accept the assertion.  If the assertion is unreasonable, then the arguer bears the burden of proof to show (with further supporting premises) why we ought to accept the particular assertion.

When the arguer's conclusion isn't reasonable (i.e., when the burden of proof falls upon the arguer's conclusion), an argument must be made!  That is, he's now going to have to back up his conclusion with premises.  If, in turn, any of the premises are considered unreasonable, then they too will have to be backed up with further premises.  That is, he will also bear the burden of proof to support those premises.

Now we can give a formal definition:  A burden of proof speaks to the reasonableness of an assertion (be it a conclusion or a premise); the person who opposes whatever is considered reasonable bears the burden of proof--that is, it's up to them to convince us (through argument) that the default position is unreasonable or incorrect.  Without a supporting argument, we have no good reason to take their point of view seriously.

Let's look at a few examples to illustrate:
When people deny that the moon landing happened, the burden of proof is on them.  They are taking a position against all experts and mountains of physical evidence.  The reasonable position is that people landed on the moon; the burden falls upon deniers to show why we should reject it, and to assume otherwise requires further argument.

When people say that the earth is only 6,000 years old, the burden of proof falls upon them.  It's up to them to show why multiple converging lines of evidence are mistaken in their implications and why the theory upon which modern geology and biology are founded is incorrect.  It's reasonable to think that virtually all geologists are well qualified to determine what theories do or do not apply to the age of the earth.  To assert a claim that implies that virtually all geologists are wrong requires further argument.

One last note on burdens of proof (laaaaaaaa!):
Historically, burdens of proof can shift.  So, what was a reasonable assumption a few hundred years ago might be unreasonable today.  We see this with social assumptions.  For example, it wasn't too long ago that it was reasonable (for men) to assume that women weren't capable of math and science.  Someone (back then) assuming the opposite would bear the burden of proof.  Now, that burden of proof has shifted.  

Economics is one area where the burden of proof is shifting.  It used to be the common assumption that humans are (classically) rational--always seeking to maximize personal interest along the lines of classical mathematical rules.  Behavioural economics, interdisciplinary psycho-economics, and socio-economic theory are starting to show these assumptions are wrong.  Given this mounting empirical evidence, the burden of proof is shifting concerning economic models built upon the assumption of (classically) rational agents.

Notice that when burdens of proof shift, it often has to do with accumulation of evidence (and reasons).  So, maybe in the future we will discover mountains of evidence that the moon landing was a hoax and that the earth is 6000 years old.  If this happens the burden of proof will shift.

Argument Jiu Jitsu
When constructing a strong argument, whenever possible, try to keep the burden of proof on your opponent.  Hai-ya! 

Premise Acceptability
Premise acceptability is closely related to the issues discussed under burden of proof.  Premise acceptability is the degree to which the intended audience of the argument will accept the premises as reasonable.  As I've mentioned a few times already, no matter how air-tight your logical progression from premises to conclusion, if your audience doesn't accept your premises at the start, they'll never accept your conclusion.

Think of it this way: A strong argument merrily leads your audience down the garden path to your conclusion.  If they never take your hand in the beginning, they'll never skip along the garden path with you to your glorious conclusion!

The lesson here?  (1) When constructing an argument, do your best to make sure the premises are acceptable to your audience.  (2) As a critical thinker examining another's argument, ask yourself of each premise if it will be considered reasonable by the standards of your audience.  

Logical Consequence (or Logical Force)
Logical consequence or force is the degree to which we are "forced" to accept the conclusion if we've accepted the premises.  When we evaluate an argument for logical force, as much as we can, we want to separate this evaluation from the acceptability of the premises.  To do this we can ask, "assuming all the premises are true, am I forced to accept the conclusion?"  Asking this question helps to disentangle the two criteria.

A strong logical argument would be something like this:
(P1)  All cats have 4 legs.
(P2)  Bob is a cat.
(C)   Bob has 4 legs.

If I accept (P1) and (P2), I'm logically forced to accept (C).

A weaker logical argument would be something like this:
(P3)  Every time I eat fish, I don't get sick.
(C2)  Fish causes me to be healthy.

The logic in this argument is a little weaker for a bunch of possible reasons--here are a few: (a) perhaps I don't eat fish by itself, so maybe it's something else that keeps me healthy--like the tartar sauce I always eat with my fish; (b) maybe it's just dumb luck that in the few days following eating fish I haven't happened to get sick; or (c) maybe I only eat fish when I'm already feeling good.  For anyone keeping track, this is called the "post hoc, ergo propter hoc" fallacy.  It means, "after, therefore, because of."  Or colloquially, "confusing correlation with causation."

It's not a logical impossibility, but the logical connection between the premises and the conclusion is weak.  (P3) doesn't compel me to accept (C2).  I can accept (P3) without accepting (C2).  However, in the first example, if I accept (P1) and (P2),  I must also accept (C) or I risk being arrested by the logic police.
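One way to make "logical force" concrete is a brute-force model check: enumerate the possible situations and see whether any of them makes all the premises true and the conclusion false (a counter example).  The Python sketch below does this for both arguments above; the mini-domains, the causal toy model for the fish argument, and the function names are all invented for illustration.

```python
from itertools import product

# Argument 1: (P1) all cats have 4 legs; (P2) Bob is a cat; (C) Bob has 4 legs.
# Brute-force over whether Bob is a cat and how many legs Bob has.
def argument1_has_counterexample():
    for is_cat, legs in product([True, False], range(6)):
        p1 = (not is_cat) or legs == 4   # "all cats have 4 legs", applied to Bob
        p2 = is_cat
        c = legs == 4
        if p1 and p2 and not c:          # premises true, conclusion false?
            return True
    return False

# Argument 2: (P3) every time I eat fish I stay healthy; (C2) fish causes health.
# Toy model: I always eat fish with tartar sauce, so P3 holds if either one
# is the real cause of my staying healthy.
def argument2_has_counterexample():
    for fish_causes_health, sauce_causes_health in product([True, False], repeat=2):
        p3 = fish_causes_health or sauce_causes_health
        c2 = fish_causes_health
        if p3 and not c2:                # e.g., the sauce is the real cause
            return True
    return False

print(argument1_has_counterexample())  # False: no counterexample, so valid
print(argument2_has_counterexample())  # True: a counterexample exists, so weak
```

The first argument survives every situation we throw at it; the second fails as soon as we imagine the tartar sauce doing the work, which is exactly reason (a) above.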

Conclusion
As always we can apply this new information in two ways:  (1)  As critical thinkers criticizing an argument or (2) as clever scholars constructing our own arguments.  In both cases we need to be cognizant of the following:

(A)  When the arguer's conclusion is unreasonable, he bears the burden of proof to give an argument for why the audience should accept it.  (Same goes for the conclusion's critic)
(B)  A strong argument will have premises that are accepted as reasonably true by the audience.
(C)  A strong argument will compel us through logical force to accept its conclusion if we have accepted its premises.

As critical thinkers we should ask of all arguments:
(D)  Who bears the burden of proof?
(E)  How acceptable (i.e., reasonably true) are the premises?
(F)  To what degree does the conclusion necessarily follow from the premises?


Monday, February 11, 2013

Critical Thinking: BS Detectors on Stun. How to Detect Illegitimate Biases

Introduction
In this section we are going to start learning how to detect BS.  Let's move beyond the general notion of 'bias' and get more specific about biases and how they affect the strength and validity of arguments.  Recall that one way we can classify biases is according to how much skin the arguer has in the game; that is, the degree to which the arguer stands to gain from his audience accepting his position.  In this respect we can make 3 broad categories of bias: legitimate, illegitimate, and conflict of interest.  By now you should be able to say something about each type.  Moving on...

Confirmation Bias:
Another way to classify bias in an argument is according to how the information is presented.  One of the most familiar biases is confirmation bias.  Confirmation bias is when we only report the "hits" and ignore the "misses"; in other words, we only include information/evidence/reasons in our argument that support our position and we ignore information that disconfirms.  Confirmation bias is often (but not always) unintentional and everyone does it to some degree (except me).

What?  You don't think you do?  Oh, I get it.  You're special.  Ok, smarty pants.  Here's a test.  Let's see how smart you are.  And don't forget you've already been given fair warning of what's going to happen.  The smart money says you will still fall into the trap.

Click on this link and do the test before you continue:
http://hosted.xamai.ca/confbias/index.php
.
.
.
.
.
I said do the test first!
.
.
.
.
.
.
Well?  Vas happened? I'm going to continue with the assumption that you committed the confirmation bias.  Hey, don't feel bad--we're hardwired for it.  Before we move forward and discuss how and why confirmation bias works, let me take you on a philosophical aside.

Aside on Falsificationism
I promised myself I wouldn't do this but it'd be helpful to bring in a little philosophy here.  Please meet my good friend Karl Popper (no relation to the inventor of the popular snack food known as Jalapeno Poppers).

Popper made a very important philosophical observation in regards to how we can test a hypothesis: he said we cannot conclusively prove a hypothesis true; we can only show it to be false.  This is called falsificationism.  In other words, no number of confirming instances proves a hypothesis true, but a single counter-instance suffices to show that it is false.  We should focus on looking to falsify rather than to confirm.

In technical philosophy we refer to an instance of a falsification as a counter example.  A counter example is a case in which all the premises are true but the conclusion is false (more on this later).

For illustrative purposes let's apply this principle to the number-pattern test from the link.  You were given a series of numbers and asked to identify the principle that describes the pattern.  Suppose (unbeknownst to you) the ordering principle is any 3 numbers in ascending order.  How did you go about trying to discover the ordering principle?  Like most people, you looked at the numbers and thought it was something to do with evenly spaced even numbers.  You looked at the sample pattern and proposed sequences that conformed to your hypothesis.

For instance, if the initial pattern was 2, 4, 6 you might have thought, "ah ha! the pattern is successive even numbers!"  So, you tested your hypothesis with 8, 10, 12.  The "game" replied: yes, this matches the pattern.  Now you have confirmation of your hypothesis that the pattern is successive even numbers.  Next, you want to further confirm your hypothesis, so you guess 20, 22, 24.  Further confirmation again!  Wow!  You are definitely right!  Now, you plug your hypothesis (successive even numbers) into the game, but it says you are wrong.  What?  But I just had 2 instances where my hypothesis was confirmed?!

Back to Confirmation Bias
Here's the dealy-yo.  You can confirm your hypothesis until the cows come home.  That is, there are infinitely many ways to confirm the hypothesis.   However, as Popper noted, what you need to do is to ask questions that will falsify possible hypotheses.  So, instead of testing number patterns that confirm what you think the pattern is, you should test number sequences that would prove your hypothesis to be false.  That is, instead of plugging in more instances of successive even numbers you should see how the game responds to different types of sequences like 3, 4, 5 or 12, 4, 78.  If these are accepted too, then you know your (initial) hypothesis is false.

Let's look at this from the point of view of counter examples.  Is it possible that all our number strings {2, 4, 6}, {8, 10, 12}, {20, 22, 24} are true (i.e., conform to the actual principle--ascending order) but our conclusion is false (i.e., that the ordering principle is sequential even numbers)?  The answer is 'yes', so we have a counter example.  In other words, it's possible for all the premises to be true (the number strings) yet for our conclusion to be false.

How do we know our premises can be true and the conclusion false?  Because our selected number strings are also consistent with the actual ordering principle (3 numbers in ascending order).  If this is the case (and it is), all of the premises are true and our conclusion (our hypothesis) is false.  We have a counter example and should therefore reject (or in some cases further test) our hypothesis.

If you test sequences by trying to find counter examples you can eventually arrive at the correct ordering principle, but if you only test sequences that further confirm your existing hypothesis, you can never encounter the evidence needed to reject it.  If you never reject your incorrect hypothesis, you'll never get to the right one!  Ah!  It seems sooooooo simple when you have the answer!
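The two testing strategies can be sketched in a few lines of Python.  This is my own reconstruction of the game described above (the hidden rule, the hypothesis function, and the guess lists are invented for illustration, not taken from the actual test at the link).

```python
# Hidden rule of the game: any three numbers in ascending order.
def hidden_rule(seq):
    return seq[0] < seq[1] < seq[2]

# The player's (wrong) hypothesis: successive even numbers.
def my_hypothesis(seq):
    return seq[0] % 2 == 0 and seq[1] == seq[0] + 2 and seq[2] == seq[1] + 2

# Confirming strategy: only test sequences the hypothesis says are "yes".
confirming_guesses = [(8, 10, 12), (20, 22, 24), (2, 4, 6)]
# Falsifying strategy: test sequences the hypothesis says are "no".
falsifying_guesses = [(3, 4, 5), (12, 4, 78), (1, 2, 100)]

# The game agrees with every confirming guess, so these guesses can
# never reveal that the hypothesis is wrong.
for seq in confirming_guesses:
    assert hidden_rule(seq) == my_hypothesis(seq) == True

# A falsifying guess exposes the mismatch: (3, 4, 5) fits the hidden
# rule even though the hypothesis predicts it shouldn't.
refuted = any(hidden_rule(seq) and not my_hypothesis(seq)
              for seq in falsifying_guesses)
print(refuted)  # True: the hypothesis is refuted
```

The confirming loop never fails, no matter how many even sequences you add; only the guesses the hypothesis predicts should be rejected can generate the counter example that forces a rethink.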

Why do we care about all this as critical thinkers?
When most arguments are presented, they are presented with evidence.  However, (usually) the evidence that is presented is only confirming evidence.  But as we know from the number-pattern example, the evidence can support any number of hypotheses.  To identify the best hypothesis we need to try to disconfirm as many hypotheses as possible.  In other words, we need to look for evidence that can make our hypothesis false.  The hypothesis that stands up best to falsification attempts has the highest (provisional) likelihood of being true.

As critical thinkers, when we evaluate evidence, we should look to see not only if the arguer has made an effort to show why the evidence supports their hypothesis and not another, but also what attempt has been made to prove their own argument false.  We should also be aware of this confirmation bias in our own arguments.

Bonus Round:  Where do we often see confirmation bias?
Conspiracy theories and alt-med are rife with confirmation bias.  Evidence is only used that supports the hypothesis.  Alternative accounts of the results are not considered and there is often no attempt to falsify the pet hypothesis.

Confirmation Bias and the Scientific Method:
We'll discuss the scientific method in more detail later in the course but a couple of notes are relevant for now.  The scientific method endeavors to guard against confirmation bias (although, just as in any human enterprise, it sometimes creeps in).  There are specific procedures and protocols to minimize its effect.  Here are a few:
  • When a scientist (in a lab coat) publishes an article, it is made available to a community of peers for criticism.  (Peer review)
  • Double blinding
  • Control Group
  • Incentives for proving competing hypotheses and theories wrong (be famous!)
  • Use of statistical methods to evaluate correlation vs causation

Confirmation Bias 2:  Slanting by Omission and Distortion
Slanting by omission and distortion are 2 other species of confirmation bias.  Slanting by omission, as you might have guessed, is when important information is left out of an argument to create a favorable bias.

Perhaps a contemporary example can be found in the gun-rights debate.  We often hear something like "my right to bear arms is in the Constitution."  While this is true, the statement omits the first clause of the Second Amendment, which qualifies the second; i.e., that the right to bear arms arises out of the historical need for national self-defense.  The Constitution is mute on the right to bear arms for personal security.  There's also the troublesome phrase "well-regulated."

Omitting these facts slants the bias in favor of an argument for an unregulated right to bear arms based on personal self-defense.  This may or may not be a desirable right to have, but it is an open question as to whether this right is constitutionally grounded.

Another example of slanting by omission might be the popular media portrayal of terrorists in the US as being of foreign origin.  Such an argument omits many contemporary acts of domestic terrorism perpetrated by white American males (for example, Ted Kaczynski, aka the Unabomber, and Timothy McVeigh).

Slanting by distortion is when opposing arguments/reasons/evidence are distorted in such a way as to make them seem weaker or less important than they actually are.  Think of slanting by distortion as something like white lies.

For example, famously, when Bill Clinton said "[he] did not have sexual relations with that woman," he was slanting by distortion in the way he deceptively used the term 'sexual relations'.

Summary
  • A common type of bias is confirmation bias in which only confirming evidence and reasons are cited, and falsifying evidence is ignored.  
  • A good way to test a hypothesis or argument is to ask whether it's possible for all the premises to be true and the conclusion to be false; that is, whether there are counter examples.  Instead of emphasizing confirming evidence, a good argument also tries to show why counter examples fail.  In other words, it shows why, if all the premises are true, we must also accept the particular conclusion rather than another one.  
  • As critical thinkers assessing other arguments, we should try to come up with counter examples.
  • Slanting by omission is when important information (relative to the conclusion) is left out of an argument.
  • Slanting by distortion is when opponents' arguments/evidence are unfairly trivialized.




Wednesday, February 6, 2013

Critical Thinking: Biases, Vested Interests, and Conflicts of Interests


Introduction
The previous chapter on arguments focused on how differences in systems of beliefs give rise to arguments.  People with disparate systems of beliefs often hold differing values and beliefs, which in turn influence what they consider to be basic assumptions (to be used in an argument as premises).

It should also be mentioned that sometimes the difference isn't so much that the values are different in an absolute sense, but that they are held to different degrees.  For example, much social psychology has shown that conservatives favour attributing moral status and providing resources to "in group" members, while liberals often concern themselves more with "out groups" (than do conservatives).  This is not to say conservatives don't care about "out groups" or that liberals don't care about "in groups"; instead, it is a matter of relative value.

For more information on the psychological differences between conservatives, liberals, and libertarians check out this great website: http://www.moralfoundations.org/index.php?t=home

So, why does this all matter to us as critical thinkers?  There are a host of reasons, but here are two important ones:  The first is that understanding the role of systems of belief in an argument can help make us aware of biases in the premises (both in our opponent's argument and in our own).  The second is that understanding an opponent's bias can give us hints as to how we might sway the opponent to our own point of view.

Mommy?  What's a Bias?
A bias is an "inclination or prejudice for or against" some fact or point of view.  In arguments, what this means is that we are prone to giving undue favour or neglect to some fact or point of view.  Everybody does this (except me); it's part of being a human being.

There is a wealth of evidence in the psychological "litra-cha" demonstrating that we begin with our position first, then collect or reject evidence and reasons to support that pre-existing position.  Our pre-existing position is usually grounded in emotion/preferences rather than "Reason."  The more emotional our investment in an issue, the greater the likelihood that some kind of bias has crept into our supporting arguments--in attributing either undue strength to a supporting assertion or in overlooking or dismissing contrary reasons or evidence.

Biases:  Too Illegit to Quit?
We've established people (except me) have biases.  Now what?  Do we automatically reject everybody's arguments 'cuz they're biased?  Nope.

We can make a distinction between legitimate and illegitimate biases.  The distinction will depend mostly on how opposing reasons, evidence, and arguments are portrayed, and on whether there are any intentional important omissions.  As you might have guessed, an illegitimate bias is one in which the arguer poorly or dishonestly represents the aforementioned elements, or one in which the bias leads to weak logical connections between premises and conclusion.  Any website or blog with a strong political bias in either direction will provide excellent samples of arguments with illegitimate biases.

A legitimate bias is simply favoring a point of view, but not in a way such that the opposing position is misrepresented.  It allows an impartial observer to fairly evaluate the proposed point of view.  For example, I think everyone should be allowed to own an assault rifle bow that fires swords for self-defense.  That's my point of view.


My argument is that they are not prohibited by the Constitution, therefore, they should be legal.  My opponents reply that the 2nd Amendment isn't about arms for personal self-defense but for a well-regulated militia that should be controlled by the Gov't.  They'd also argue that just because a small group of people a few hundred years ago voted on something, doesn't mean that we need to accept it now.  Societies and circumstances change, and the best laws reflect that.  Notice that even though I'm biased toward people owning assault rifle bows that fire swords, I don't distort the opposing arguments.

Vested Interests
A vested interest is when an arguer (or someone paying the arguer) stands to benefit from their point of view being accepted.  When vested interests are involved there's a very high likelihood of illegitimate bias.

For example, when certain industries spend millions of dollars to pay lobbyists and "donate" to politicians, we can be fairly certain that their arguments for special treatment or exemption contain illegitimate biases.

Not all vested interests need be financial.  One might be motivated by the desire for power, fame, revenge, attention, sex, etc., or to get out of trouble/prove one's innocence.

We should be cautious of dismissing arguments out of hand just because the arguer has a vested interest in the outcome.  That they have a vested interest tells us nothing about the argument's validity, which should be evaluated independently.  When there is a vested interest, it simply means we should be extra cautious about illegitimate biases (and omissions).

Conflict of Interest
A conflict of interest is a vested interest on steroids; i.e., when vested interests are extreme.  In such cases there is usually an ethical issue involved too, and in professional settings, conflicts of interest have to be disclosed.

For example, in medical research, if a university study of a drug is funded by the company that produces the drug, this is a conflict of interest for the researchers.  It must be disclosed at the beginning of any research that is published.

An important recent example of an undisclosed conflict of interest in medicine was Andrew Wakefield's anti-vaccine research article in the Lancet.  What he did not disclose was that he had been paid several million dollars to do vaccine research by a company that was developing an alternative to the conventional vaccine.

There was a clear conflict of interest: he stood to gain a great deal if his research showed that conventional vaccines were unsafe, since the company funding the research was developing an alternative.

In the end, his results were never replicated, his methods were shown to be unethical, his data were drawn from a tiny sample (12 children), and the article was subsequently retracted by the publisher.  However, because of the fear generated by his "research," there was and continues to be tremendous damage to public health.

Summary:  
We all have biases.  What matters is the degree to which they distort the presentation of evidence and reasons in arguments both for and against the arguer's position.  Biases are illegitimate when they cause distortion such that arguments cannot be fairly evaluated.