Last year, an aunt was diagnosed with extremely low bone density. She had been complaining of back pain and weakness, and a few tests later, her orthopedist confirmed that bone density was the problem. She was put on a course of medication, and was also given shots. A year later, she got her bone density tested again, and found that there was not much improvement.
She did the rounds of the doctors again – orthopedists, endocrinologists and the like – and the first few were puzzled that the medication and the shots had had no effect. One of the doctors, though, saw something the others didn’t – “there is no marked improvement, for sure”, he remarked, “but there is definitely some improvement”.
Let us say you take ten thousand observations in “state A”, and another ten thousand in “state B”. The average of your observations in state A is 100, and the standard deviation is 10. The average of your observations in state B is 101, and the standard deviation is 10. Is there a significant difference between the observations in the two states?
Statistically speaking, there most definitely is. With 10000 samples, the “standard error” of each mean, given a standard deviation of 10, is 0.1 (10 / sqrt(10000)), so the standard error of the difference between the two means is about 0.14 (sqrt(0.1^2 + 0.1^2)). The observed difference of 1 is thus about seven standard errors, which means that the difference between the two sets of observations is “statistically significant” to a very high degree. The question, however, is whether the difference is actually “significant” (in the non-statistical sense of the word).
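In case you want to check the arithmetic, here is a minimal sketch in Python (the sample sizes, means and standard deviations are the made-up numbers from above; scipy is used only to turn the z-score into a p-value):

```python
# Two-sample z-test from summary statistics alone.
import math
from scipy import stats

n_a, mean_a, sd_a = 10_000, 100.0, 10.0
n_b, mean_b, sd_b = 10_000, 101.0, 10.0

se_a = sd_a / math.sqrt(n_a)            # standard error of mean A: 0.1
se_b = sd_b / math.sqrt(n_b)            # standard error of mean B: 0.1
se_diff = math.sqrt(se_a**2 + se_b**2)  # SE of the difference: ~0.141

z = (mean_b - mean_a) / se_diff         # ~7.07 standard errors
p = 2 * stats.norm.sf(abs(z))           # two-sided p-value, ~1.5e-12

print(f"z = {z:.2f}, p = {p:.1e}")
```

A p-value that small is what “statistically significant to a very high degree” means in practice – the observed difference is essentially impossible to explain by sampling noise alone.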
Think about it in the context of drug testing. Let us say that we are testing a drug for increasing bone density among people with low bone density (like my aunt). Let’s say we catch 10000 mice and measure their bone densities. Let’s say the average is 100, with a standard deviation of 10.
Now, let us inject our drug (in the appropriate dosage – scaled down from man to mouse) into our mice, and after they have undergone the requisite treatment, measure their bone densities again. Let’s say that the average is now 101, with a standard deviation of 10. Based on this test, can we conclude that our drug is effective for improving bone density?
What cannot be denied is that one course of medication produces results among the mice that are statistically significant – there is an increase in bone density that cannot be explained by randomness alone. From this perspective, the drug is undoubtedly effective – it is extremely likely that taking the drug has a positive effect.
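To make this concrete, here is a small simulation sketch (the means, standard deviation and sample size are the made-up numbers from above; a real before-and-after experiment on the same mice would call for a paired test, but the unpaired version is enough to illustrate the point):

```python
# Simulate "before" and "after" bone densities for 10000 mice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
before = rng.normal(loc=100, scale=10, size=10_000)
after = rng.normal(loc=101, scale=10, size=10_000)

res = stats.ttest_ind(after, before)
print(f"mean before = {before.mean():.2f}, mean after = {after.mean():.2f}")
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.1e}")  # p is tiny
```

Run this and the p-value comes out vanishingly small, even though the actual improvement is just one unit on a scale where individual mice vary by ten.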
However, does this mean that we should use this drug for treating low bone density? Despite the statistical significance, the answer is not very clear. Let us for a moment assume that there are no competitors – there is no other known drug which can increase a patient’s bone density by a statistically significant amount. So the choice is this – we either use no drug, in which case the patient sees no improvement (let us assume that another experiment has shown that in the absence of any drug, there is no change in bone density), or we use this drug, which produces a small but statistically significant improvement. What do we do?
The question we need to answer here is whether the magnitude of improvement on account of taking this drug is worth the cost (monetary cost, possible side effects, etc.) of taking the drug. Do we want to put the patient through the trouble of taking the medication when we know that the difference it will make, though statistically significant, is marginal? It is a fuzzy question, and doesn’t necessarily have a clear answer.
In summary, the basic point is that a statistically significant improvement does not mean that the difference is significant in terms of magnitude. With large enough samples, even tiny changes can be statistically significant, and we need to be cognizant of that.
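One common way of keeping the two notions apart is to report an effect size alongside the p-value. A standard (if crude) measure is Cohen’s d – the difference between the means in units of the standard deviation. Here is a sketch using the numbers from above (the “small effect” threshold of 0.2 is Cohen’s conventional rule of thumb, not something from this experiment):

```python
# Cohen's d: difference between means, in units of the standard deviation.
mean_before, mean_after, sd = 100.0, 101.0, 10.0

d = (mean_after - mean_before) / sd
print(f"Cohen's d = {d:.2f}")  # 0.10 - below even the ~0.2 "small effect" mark
```

Unlike the p-value, d does not grow with sample size – it answers “how big is the difference?”, not “are we sure there is one?”.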
No mice were harmed in the course of writing this blog post.