What happens now is that any hack with an internet connection can post an article about a "study" purporting to support whatever ideology that particular website promotes. This is a problem for several reasons, but I will focus on two here:
(1) Very often the article and (especially) the headline do not support the data or the conclusions of the actual study.
(2) The results of the study are not reported in the context of the background literature on the topic.
Let's take a look at an example to illustrate what I'm talking about...
Consider this article that breathlessly announces that science "proves" (a Harvard study, no less!) that faith has a positive effect on healing. Let's take a look at the claims in the article, then contrast them with what's in the abstract of the actual study (I don't feel like paying $35.00 for full access).
Basic overview of the study: The study looked at the effects of (teaching?) positive religious coping beliefs (it's all part of God's plan) to psychiatric patients, some of whom, pre-treatment, had negative religious coping beliefs (God's punishing you, the Devil's doing it). The upshot is that getting these people to have positive coping beliefs leads to better outcomes.
Aside: Am I the only one who finds it ironic that the study is about trying to treat people who often suffer from mental delusions with more mental delusions?
Headline: Faith and healing: Religious coping improves outcomes for people being treated for severe psychiatric illness.
Ok, so based on this headline I'd believe that a Harvard study shows that religious coping mechanisms improve outcomes for severe psychiatric illness. Wow! As if fear of God smiting me wasn't enough incentive to believe!
The first paragraph then goes on to restate the study's "findings", plus give them some cred because it was a study by a researcher affiliated with Harvard Medical School.
Ok, so we need to ask a couple of basic questions about (a) what was actually in the study compared to how it was reported and (b) what basic weaknesses the actual study might have.
Reporting Vs The Study
First of all, the way the headline of the article reads, it seems like it's an open-and-shut case that religious coping improves outcomes. Let's take a look at the first line of the actual study's abstract:
Religious coping is very common among individuals with psychosis, however its relevance to symptoms and treatment outcomes remains unclear.
Well, that doesn't convey the same degree of certainty as the article headline, does it? In all fairness, though, this might not be the right interpretation of what the authors intend. They probably mean that since outcomes have been unclear, they decided to do a study... Let's let that slide.
Now let's look at the final results of the study (as reported in the abstract):
Negative religious coping appears to be a risk factor for suicidality and affective symptoms among psychotic patients. Positive religious coping is an important resource to this population, and its utilization appears to be associated with better treatment outcomes.
Well, based on this information, appealing to faith and religious coping--unqualified--might not be a good idea for all groups, but this is certainly not what we'd assume from the article headline. What seems to be good is a particular type of religious belief.
At this point, we might want to ask what the relative effect of negative and positive religious beliefs might be on mental health. If negative-coping religious beliefs only have a small effect while positive-coping religious beliefs have a large effect, we could plausibly argue that, on balance, religious beliefs (unqualified) and faith might be a good thing for mental health. Let's take a look at the numbers, shall we?
According to the study the negative-coping religious beliefs accounted for 9-46.2% of the negative effect on pre-treatment conditions while positive-coping religious beliefs accounted for 13.7-36% of the positive outcomes.
So, again, it looks like, contra the headlines, not just any old religious beliefs will do. Consider that the article (and the study) claim that religious belief (unqualified) has a positive effect on mental health. However, their own data show that 9-46.2% of a subject's psychological problems can be attributed to religious belief. Furthermore,
Negative religious coping appears to be a risk factor for suicidality and affective symptoms among psychotic patients.
Purportedly, the good effects only come from a particular kind of religious belief. I sure hope they're the ones sanctioned by God and not the ones that are merely convenient for us!
I'll come back to these numbers when we look at the study.
Up until now we've been throwing around the terms "religious beliefs" and "faith" as though they only come in two flavors and degrees: positive and negative. What we should check is how this quality was assigned.
At this point, with all the category shenanigans, we might want to turn a critical eye to the study itself.
A. One thing we might consider is the relative effect sizes of negative and positive-coping beliefs and how subjects might be clustered within these ranges. For example, the study says that the negative impact of negative-coping religious beliefs is between 9 and 46.2%. We have no way of knowing where in this range the subjects are distributed. Are 90% of them clustered near 46% and 10% near the 9%? Without paying the $35.00, we don't know. Nevertheless, this is important.
Suppose it is as I hypothesize: most negative-copers are near the 46% mark. Now suppose also that when subjects receive the beneficial effects (range 13.7-36%) of positive coping beliefs, they are clustered near the 13.7% end of the spectrum, with a few outliers in the high range. If this is the case, then the net effect of religious belief is still (significantly) negative. (A possible reply: yes, but with the positive beliefs they are still moderately better off than they would have been otherwise. True. I'll address this in section D below.)
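To make this concrete, here's a toy calculation. The distribution numbers (90/10 splits) are entirely hypothetical, since the subject-level data sits behind the pay wall; only the reported ranges come from the abstract. The point is just that the same reported ranges can yield a net-negative effect depending on where subjects cluster:

```python
# Toy illustration with HYPOTHETICAL clustering: the study only reports
# ranges (negative coping: 9-46.2% harm; positive coping: 13.7-36% benefit),
# not where subjects fall within them.

def weighted_mean(values_weights):
    """Average effect given (effect_size, fraction_of_group) pairs."""
    return sum(v * w for v, w in values_weights)

# Hypothetical: 90% of negative-copers near the high (46%) end of the harm range.
negative_effect = weighted_mean([(46.0, 0.9), (9.0, 0.1)])   # 42.3% average harm

# Hypothetical: 90% of positive-copers near the low (13.7%) end of the benefit range.
positive_effect = weighted_mean([(13.7, 0.9), (36.0, 0.1)])  # ~15.9% average benefit

net = positive_effect - negative_effect
print(f"net effect of religious belief (unqualified): {net:+.1f} percentage points")
```

Under these (again, made-up) clustering assumptions, the net effect comes out around -26 points: religious belief, unqualified, would be a bad deal on balance even though the positive-coping range looks impressive in a headline.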
B. One of the first things you should look at when evaluating a study, because it takes very little math, is the sample size. The sample size here is 47. But the study isn't making conclusions about a sample of 47 people; the conclusions are about sub-groups made up of those 47 people. Let's do some 'rithmetic:
Group 1: 8% of 47 = 3.76 people (WTF?) are "very religious"
Group 2: 20% of 47 = 9.4 people (WTF?) are "religious"
Group 3: 85% of 47 = 39.95 people (WTF?) "use spirituality in some way as a coping mechanism"
(I think we can safely assume that Group 3 comprises Groups 1 and 2 plus 26 or so other people.)
Group 4 (?): 7 people who didn't have any spiritual coping mechanism.
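The subgroup arithmetic above can be sketched in a few lines. The percentages are the ones reported; the fractional people they imply are exactly the red flag:

```python
# Subgroup sizes implied by the reported percentages of a 47-person sample.
sample_size = 47
reported = {
    "very religious": 0.08,
    "religious": 0.20,
    "uses spirituality as coping": 0.85,
}

for label, pct in reported.items():
    n = sample_size * pct
    # Non-integer counts (3.76 people?) mean the percentages were rounded
    # or sloppily reported -- either way, these subgroups are tiny.
    flag = "" if n == int(n) else "  <- fractional people (WTF?)"
    print(f"{label}: {n:.2f}{flag}")

# The leftover, would-be "control" group:
non_spiritual = sample_size - round(sample_size * reported["uses spirituality as coping"])
print(f"no spiritual coping: {non_spiritual} people")
```

Seven people left over for any comparison group, and three or four in the "very religious" bucket. Keep those numbers in mind for the next two points.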
Where to begin? Here are a few problems to consider:
C. The study's conclusions are about the effects of negative and positive-coping religious beliefs, but as the results are reported, the outcomes aren't divided up into the 4 (?) subgroups. This creates some problems (unless we get behind the pay wall). For example, it could be that all the "very religious" people (3.76 people) are the ones who experienced the greatest positive effect, in which case we're drawing conclusions based on a sample of 3 or 4 people. I shouldn't have to explain why this is a problem. There are a boat-load of similar problems depending on how the different categories overlap.
D. Probably the biggest problem: where dafuk is the control? Results are fairly meaningless without one. The entire group of subjects with any kind of uber-broadly-defined religious coping beliefs (i.e., spiritual) was 40 out of 47 people. So the control/non-religious group was 7? Or were these 7 also receiving "treatment" with positive religious coping beliefs? Any well-designed study has a control group that is roughly equal in size to the treatment group.
How can we attribute effect if we aren't comparing to anything? The subjects were all getting treatment. What they need is a group that gets the same treatment minus the positive religious coping beliefs, then we might be able to infer that it's the positive coping beliefs that are doing the work.
E. How do we know that it is the fact that the coping beliefs are of a religious nature that is the reason for the outcome? Maybe you could bestow similar coping devices with secular trappings. We'd need to control for this...oops!
F. This brings me to my final analytical point. You cannot evaluate studies in isolation. Every study needs to be evaluated within the context of basic scientific knowledge as well as the research literature of which it is a part. If an individual study shows results that conflict with either of these two contexts, you should be very skeptical.
It doesn't mean the study is wrong, it only means that the burden of proof is higher for that particular study and we should probably wait for replication (with good controls, large sample sizes, and rigorous methodology) before we think magic treatment x cures all our ills without side-effects. If one study purports to overturn basic science and well-established literature, the burden should be high. That's how science works.
Also, you cannot give each study equal weight. You have to look at the relative quality of the study. A study with sound methodology and controls should be weighted more heavily than one that has obvious flaws. Not all studies are created equal.
As for this particular study, I'm not too familiar with the literature and am too lazy to do a search right now. I know that I have seen well-designed studies as well as meta-analyses that show no effect or negative effect for intercessory prayer (not the same issue but related). I've also seen well designed studies showing that some popular "positive thinking" techniques can actually have a negative effect. I've also seen studies to the contrary. I'm not familiar enough with the evidence to have a position.
My main point regarding this study is that, to properly evaluate it, I'd have to get behind the pay wall. However, one thing is certain, the headline of the article and the contents of the study don't match up too well.
Are You Scurd? With Ami's Handy-Dandy Stress-Free Quantum Critical Thinking Course, You Shouldn't Be
What's scary about this? First of all, the website it was posted on looks pretty "science-y". There are, however, a few red flags. The first is ads. This in and of itself doesn't delegitimize the source, but it tells us that the purpose of the site might be to get eyeballs rather than to carefully curate legitimate science literature. (Of course, the two are not necessarily mutually exclusive.)
This can be a problem because the site may have a tendency to either (a) present studies in a way that isn't representative of their actual content in order to draw eyeballs, or (b) favor reporting studies that are only preliminary and poorly constructed (that's why the effect is surprising).
On the website, there are lots and lots of links to articles on the latest studies. In fact, it wouldn't surprise me if a portion of the studies it has articles on are good studies. However, I find it hard to believe that, with well over 30 linked articles a day, there is much vetting going on.
But that's not the issue. The issue is how the studies are being reported, and that they are being reported in a way that is not always representative of the data in the actual study--be it a good or bad study. This is why arming yourself with critical thinking skills is so important.
At the end of the day, if you want to know if an article about a study is properly representing the study's findings you have to find the original study...and then you need to look at the abstract...and then you need to get behind the got tam pay wall if they have one.
For a limited time I'm willing to teach you the amazing secret to critical thinking the "establishment" doesn't want you to know. Please send your credit card numbers to my email. Thank you.