11th April 2014 at 2:21 am #1078
I’ve been using Field’s “Discovering Stats with SPSS” (3rd ed) and came across a potentially interesting solution to a problem I’m having. I’m wondering if I am reading his comments accurately.
I ran a Friedman test on a small sample (n=7) and found statistical significance. I then ran post hoc analyses using Wilcoxon signed-rank tests with a Bonferroni correction, but those comparisons did not detect any significance. In relation to this (at the bottom of page 577 of Field’s text), I came across the following about a similar situation: “Remember that we are now using a critical value of .0167, and in fact none of the comparisons are significant because they have one-tailed significance values of .500, .423 and .461 (this isn’t surprising because the main analysis was non-significant).”
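For anyone wanting to reproduce the mechanics being discussed, here is a minimal sketch in Python with SciPy. The scores below are invented purely for illustration (they are not from Field or from the original study); the point is the pipeline: a Friedman omnibus test, a Bonferroni-corrected alpha of .05/3 ≈ .0167 for three pairwise comparisons, and the relationship between one- and two-tailed Wilcoxon p-values (for the exact test, the two-tailed p is twice the smaller one-tailed p).

```python
import numpy as np
from scipy import stats

# Invented scores for 7 subjects under three related conditions
# (illustration only -- not the data from the original study).
cond1 = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2])
cond2 = np.array([5.4, 5.0, 6.1, 6.0, 5.5, 6.4, 6.0])
cond3 = np.array([4.9, 4.5, 5.6, 5.2, 4.6, 5.3, 4.8])

# Omnibus Friedman test on the three related samples.
chi2, p_friedman = stats.friedmanchisquare(cond1, cond2, cond3)
print(f"Friedman: chi2 = {chi2:.3f}, p = {p_friedman:.4f}")

# Bonferroni correction for three pairwise post hoc comparisons.
alpha = 0.05 / 3  # ~.0167, the critical value Field mentions
print(f"Bonferroni-corrected alpha = {alpha:.4f}")

# One Wilcoxon signed-rank post hoc comparison, both tailings.
# With n = 7, no zero differences and no tied |differences|,
# SciPy uses the exact distribution, so the two-tailed p equals
# twice the smaller one-tailed p.
_, p_two = stats.wilcoxon(cond1, cond2, alternative="two-sided")
_, p_one = stats.wilcoxon(cond1, cond2, alternative="less")
print(f"cond1 vs cond2: two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```

Note that because halving follows directly from the exact symmetric null distribution, switching from a two-tailed to a one-tailed reading after seeing the results effectively doubles the alpha, which is why the choice of tail needs to be justified before looking at the data.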
My question: why is Field talking about one-tailed significance rather than two-tailed? Should I be looking at the one-tailed results of my Wilcoxon signed-rank tests in this situation (because if I do, then there is indeed statistical significance)? Many thanks in advance.
Gary

11th April 2014 at 4:00 am #1085
A two-tailed test is used when you have two groups of a population. I understand that you have one group only. If you have two groups, or two major sets of factors, then a two-tailed test will be good.

11th April 2014 at 9:09 pm #1084
Stephen Gorard (Participant)
No – don’t use significance at all. It does not provide the kind of answer you want – just an illusion. With only 7 cases anyway, simply report your findings.

12th April 2014 at 2:59 am #1083
Interesting read, Stephen. Thanks so much for getting that out to me. You certainly have me thinking!
Do you know of any similar commentary in this regard that is published? If I’m going to go against the grain, I’m going to need to cite something.

12th April 2014 at 5:00 am #1082
I was also of a similar view: why use such a big analysis for just 7 cases? It is good just to describe the data. I was thinking maybe you have too many variables. For a small sample like this, a descriptive interpretation is good. Anyway, good luck.

12th April 2014 at 9:30 am #1081
Stephen Gorard (Participant)
Yes. All of that has been known since sig tests were first proposed. Just a few examples from thousands who have not been taken in by this strange ‘religion’ over the last 100 years or so:
Berkson, J. (1942) Tests of significance considered as evidence, Journal of the American Statistical Association, 37, 325-335
Carver, R. (1978) The case against statistical significance testing, Harvard Educational Review, 48, 378-399
Falk, R. and Greenbaum, C. (1995) Significance tests die hard: the amazing persistence of a probabilistic misconception, Theory and Psychology, 5, 75-98
Gill, J. (1999) The insignificance of null hypothesis significance testing, Political Research Quarterly, 52, 3, 647-674
Jeffreys, H. (1937) Theory of probability, Oxford: Oxford University Press
Lipsey, M., Puzio, K., Yun, C., Herbert, M., Steinka-Fry, K., Cole, M., Roberts, M., Anthony, K. and Busick, M. (2012) Translating the statistical representation of the effects of educational interventions into more readily interpretable forms, US Department of Education: NCSER 2013-3000
Nickerson, R. (2000) Null hypothesis significance testing: a review of an old and continuing controversy, Psychological Methods, 5, 2, 241-301
Rozeboom, W. (1960) The fallacy of the null hypothesis significance test, Psychological Bulletin, 57, 416-428
See also the attached piece about the vital importance of retaining judgement about results – whatever kind of data is involved.

12th April 2014 at 11:53 am #1080
Thanks kindly. I really appreciate you having taken the time on this. Cheers! Gary

12th April 2014 at 11:53 am #1079
Thanks for the advice – much appreciated. Best, Gary