
It’s bad in medicine, too.

Social science is raked over the coals so often for its misuse and misunderstanding of basic statistics that it's easy to forget that the problem is just as bad in other fields. I came across this bit in StatPearls, a free online assortment of tutorials and quizzes for medical students and professionals, on the interpretation of significance testing:

In research, statistical significance is a measure of the probability of the null hypothesis being true compared to the acceptable level of uncertainty regarding the true answer […] The p-value is best described as the probability that the null hypothesis is true given the researcher’s current set of data […] What does this mean? The p-value of 0.02 implies that there is a 2% chance of the null hypothesis being correct, true, or explained by the current set of data […] Based on the alpha of 0.05, the researcher is only 95% sure they are correct in their conclusion.
StatPearls
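
To be clear about what went wrong: a p-value is the probability of seeing data at least as extreme as what was observed, computed under the assumption that the null hypothesis is true. It is not the probability that the null hypothesis is true given the data. A quick simulation makes the gap obvious (a minimal sketch in Python using NumPy and SciPy; the two-sample t-test setup is my own illustration, not StatPearls' example): when the null really is true, p-values are uniformly distributed, so p-values below 0.02 turn up about 2% of the time even though the "chance of the null hypothesis being correct" is, by construction, 100%.

```python
# Minimal sketch: simulate many two-sample t-tests in which the null
# hypothesis (equal means) is TRUE by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 30

p_values = np.empty(n_sims)
for i in range(n_sims):
    # Both groups come from the same distribution, so H0 holds exactly.
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=0.0, scale=1.0, size=n)
    p_values[i] = stats.ttest_ind(a, b).pvalue

# Under a true null, p-values are uniform on [0, 1]: about 5% fall
# below 0.05 and about 2% fall below 0.02, even though the probability
# that H0 is true is 100% in this simulation.
print(f"fraction with p < 0.05 (H0 true): {np.mean(p_values < 0.05):.3f}")
print(f"fraction with p < 0.02 (H0 true): {np.mean(p_values < 0.02):.3f}")
```

Flipping the conditional around, from P(data | H0) to P(H0 | data), requires a prior probability for the null, which is exactly the distinction Goodman (1999) is at pains to make.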

It's no wonder that the medical literature is so replete with bad study design and bad statistics if this is how medical students are being taught (see Goodman, 1999, for a still-timely discussion). Even the World Health Organization makes this kind of stupid mistake, like this comment from A practical guide for health researchers (Fathalla & Fathalla, 2004, p. 97):

A statistical significance test, based on a P value will tell us about how likely this difference is to be real, or to be a chance finding.

Surely they can afford a statistician?
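
For a sense of how misleading the "p tells you how likely the difference is to be real" reading is in practice, here is a rough calculation in the spirit of the Goodman paper cited above (a sketch only, assuming a 50/50 prior probability for the null and the Gaussian minimum Bayes factor exp(-z²/2), i.e. the point alternative most favorable to rejection; these assumptions are mine, not the WHO's):

```python
# Rough illustration: convert a two-sided p-value into a LOWER BOUND on the
# posterior probability of the null, using the Gaussian minimum Bayes factor
# exp(-z^2 / 2) and a 50/50 prior on H0. Assumptions are illustrative only.
import numpy as np
from scipy import stats

def min_posterior_prob_null(p, prior_null=0.5):
    z = stats.norm.isf(p / 2)            # z-score matching a two-sided p-value
    min_bf = np.exp(-z**2 / 2)           # smallest possible Bayes factor for H0
    prior_odds = prior_null / (1 - prior_null)
    posterior_odds = min_bf * prior_odds
    return posterior_odds / (1 + posterior_odds)

for p in (0.05, 0.02, 0.01):
    print(f"p = {p:.2f}  ->  P(H0 | data) >= {min_posterior_prob_null(p):.2f}")
```

Even in this best case for rejection, p = 0.02 leaves the posterior probability of the null at roughly 6%, about three times the "2% chance of the null hypothesis being correct" that StatPearls promises, and more realistic alternatives push the number considerably higher.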

References

Fathalla, M. F., & Fathalla, M. M. (2004). A practical guide for health researchers. World Health Organization, Regional Office for the Eastern Mediterranean.

Goodman, S. N. (1999). Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine, 130(12), 995-1004.