Correlation and causation are not the same thing. You have probably heard this before; it's one of those basic statements that bores students because it seems like common sense, and the internet is full of pithy examples. The reason the idea is so lecture-inducing is that it fights against a natural human tendency to see patterns and attribute causes. In an effort to discipline this tendency, people like the statistician Larry Wasserman formalize the idea that an observed association between X and Y does not alone support the inference that one causes the other. Wasserman sets out the logic of causation in a useful post, then articulates the central challenge of linking this type of thinking to empirical research. He cites a fascinating finding reported by Young and Karr (2011), who identified 12 articles covering a total of 52 claimed interventions for which both observational and experimental evidence existed, meaning a correlation was observed and then subjected to a randomized experimental design. They find that 0 of the 52 claims based on observational studies are borne out when subjected to a randomized trial. That's right: zero percent. The authors report that five findings were in fact statistically significant, but in the opposite direction.
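The logic at work here can be sketched with a small simulation. This is a hypothetical illustration, not anything from Wasserman's post: a hidden confounder Z drives both X and Y, so X and Y correlate strongly in observational data, yet the association vanishes once X is assigned at random, as it would be in a trial.

```python
import random

random.seed(0)

# Hidden confounder Z drives both X and Y; neither causes the other.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

# Observational association: strong (theoretically 1/1.25 = 0.8).
print(round(corr(x, y), 2))

# "Randomized trial": assign X independently of Z, as random assignment would.
x_rand = [random.gauss(0, 1) for _ in range(n)]

# Experimental association: approximately zero.
print(round(corr(x_rand, y), 2))
```

The observational correlation is real and replicable, but it tells you nothing about what happens to Y when you intervene on X, which is exactly the gap Young and Karr's comparison exposes.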
The articles reviewed by Young and Karr (2011) were published in the Journal of the American Medical Association, the New England Journal of Medicine, and the Journal of the National Cancer Institute between 1994 and 2009. These are large, sophisticated studies. The result is honestly shocking, but even if it is true, the authors' tone, concluding for example with the statement, "This should not be allowed to continue," raises the question: what, exactly, shouldn't continue? As a dedicated consumer and producer of empirical research rooted in the world as it exists, developing useful experimental interventions strikes me as an important but ultimately secondary goal of scholarly research, for precisely the reasons the study seems to suggest. Finally, this discussion from the Economist's Graphic Detail blog is also interesting.