There have been a few news stories recently about misunderstandings of statistics, especially confusing correlation with causation. For example, the Guardian ran this story/tutorial (http://t.co/AwCugVIV). Hot on the heels of that useful reminder came a story linking unemployment in the UK to immigration from outside of the EU (http://t.co/7rTcFvbH). By the time the Daily Mail had got their hands on the story, the ‘link’ had turned into a ‘cause’ (http://tinyurl.com/6s2lujq) and, according to one of their columnists, was irrefutable proof that all immigrants should be shot (http://tinyurl.com/85xdfqy). Well, OK, he didn’t exactly say that, but if you read between the lines I’m sure that’s what he meant. The trouble is not just that journalists and editors of newspapers (even the ones that carefully manufacture self-images of being more intellectual than other newspapers) know arse all about statistical theory, but that the 4.5 million or so readers of the Daily Mail (and others) also know arse all about statistics. Even my parents, who read the Daily Mail, know arse all about statistics, and I’ve given them numerous copies of my books … that could be an association, but it’s certainly not a cause.
Anyway, all of this reminded me that when I’m trying to convince my students that statistics is a good thing to learn about, my main point is that it is a transferable skill that helps you to navigate the tricky terrain of life. After three years of a psychology degree (or any other degree that teaches applied statistics), you’re in the rather privileged position of being able to evaluate evidence for yourself. You don’t have to worry about whether the newspaper, or your GP, tells you not to vaccinate your child because the injection will grow them a second head; you can track down the research and evaluate the evidence for yourself.
To quote Utts, “What good is it to know how to carry out a t-test if a student cannot read a newspaper article and determine that hypothesis testing has been misused?” [1, p. 78]. Utts [1] suggests seven core statistical ideas that could be described as ‘useful life skills’, which I summarized as follows [2]:
(1) When causal relationships can and cannot be inferred, including the difference between observational studies and randomized experiments;
(2) The difference between statistical significance and practical importance, especially when using large sample sizes;
(3) The difference between finding ‘no effect’ and finding no statistically significant effect, especially when sample sizes are small;
(4) Sources of bias in surveys and experiments, such as poor wording of questions, volunteer response, and socially desirable answers;
(5) The idea that coincidences and seemingly very improbable events are not uncommon because there are so many possibilities (to use a classic example, although most people would consider it an unbelievable coincidence/unlikely event to find two people in a group of 30 that share the same birthday, the probability is actually .7, which is fairly high);
(6) ‘Confusion of the inverse’ in which a conditional probability in one direction is confused with the conditional probability in the other direction (for example, the prosecutor’s fallacy);
(7) Understanding that variability is natural, and that ‘normal’ is not the same as ‘average’ (for example, the average male height in the UK is 175 cm; although a man of 190 cm is, therefore, well above average, his height is within the normal range of male heights).
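Ideas 5 and 6 are the sort of thing you can check for yourself with a few lines of code. As a quick illustrative sketch (the forensic numbers in the second part are entirely made up for the example), here is the birthday probability from idea 5 and the prosecutor’s fallacy from idea 6 worked through in Python:

```python
# Idea 5: probability that at least two of 30 people share a birthday.
# Compute the chance that all 30 birthdays differ, then subtract from 1.
p_all_different = 1.0
for k in range(30):
    p_all_different *= (365 - k) / 365
p_shared = 1 - p_all_different
print(f"P(shared birthday among 30) = {p_shared:.3f}")  # ≈ 0.706

# Idea 6: confusion of the inverse (the prosecutor's fallacy).
# Hypothetical numbers: a forensic match occurs by chance in 1 in 1,000
# innocent people, and the suspect pool holds 100,000 innocent people
# plus 1 guilty person (who matches for certain).
p_match_given_innocent = 1 / 1000      # the small figure quoted in court
innocent, guilty = 100_000, 1
# Bayes' theorem: P(guilty | match) = guilty matches / all expected matches
p_guilty_given_match = guilty / (guilty + innocent * p_match_given_innocent)
print(f"P(guilty | match) = {p_guilty_given_match:.3f}")  # ≈ 0.010
```

The point of the second calculation is exactly the confusion of the inverse: P(match | innocent) is tiny (0.001), yet P(guilty | match) is still only about 1%, because there are so many innocent people who could match by chance.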
In a book chapter I wrote on teaching statistics in higher education [2], I suggest that we should try, if nothing else, to get students to leave their degree programs with these core skills. We could also think about using real-world examples (not necessarily from within our own discipline) to teach students how to apply these skills. This could have several benefits: (1) it might make the class more interesting; (2) it helps students to apply knowledge beyond the realm of their major subject; and (3) it will undermine the power that newspapers and the media in general have to sensationalize research findings, spread misinformation, and encourage lazy thinking. So, my main point is that, as teachers, we could think about these things when teaching, and students might take comfort in the fact that the stats classes they endured might have given them a useful shield to fend off the haddock of misinformation with which the media slaps their faces every day.
Right, I’m off to restructure my statistics course around those 7 key ideas ….
1. Utts J. What Educated Citizens Should Know About Statistics and Probability. The American Statistician 2003;57(2):74–79.
2. Field AP. Teaching Statistics. In: Upton D, Trapp A, editors. Teaching Psychology in Higher Education. Chichester, UK: Wiley-Blackwell, 2010.