Re: Bonferroni correction applied to Wilcoxon signed-rank test


Viewing 8 posts - 1 through 8 (of 8 total)
    Gary Fogal


    I’ve been using Field’s “Discovering Stats with SPSS” (3rd ed) and came across a potentially interesting solution to a problem I’m having. I’m wondering if I am reading his comments accurately.

    I ran a Friedman test on a small sample (n = 7) and found statistical significance. I then ran a post hoc analysis using Wilcoxon signed-rank tests with a Bonferroni correction, but none of the comparisons reached significance. In relation to this, at the bottom of page 577 of Field’s text, I came across the following comment about a similar situation: “Remember that we are now using a critical value of .0167, and in fact none of the comparisons are significant because they have one-tailed significance values of .500, .423 and .461 (this isn’t surprising because the main analysis was non-significant).”

    My question: why is Field talking about one-tailed significance rather than two-tailed? Should I be looking at the one-tailed results of my Wilcoxon signed-rank tests in this situation? (If I do, there is indeed statistical significance.) Many thanks in advance.
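    For anyone wanting to reproduce this kind of analysis, here is a minimal sketch in Python using SciPy: a Friedman test on three related conditions measured on n = 7 cases, followed by pairwise Wilcoxon signed-rank tests against the Bonferroni-adjusted alpha of .05 / 3 = .0167 that Field mentions. The data below are invented purely for illustration, not the poster’s actual data.

    ```python
    from itertools import combinations
    from scipy.stats import friedmanchisquare, wilcoxon

    # Three related conditions, n = 7 cases (illustrative values only)
    conditions = {
        "A": [12, 15, 11, 18, 14, 13, 16],
        "B": [14, 14, 15, 20, 13, 17, 19],
        "C": [11, 12, 10, 15, 12, 11, 13],
    }

    # Omnibus test across all three conditions
    stat, p = friedmanchisquare(*conditions.values())
    print(f"Friedman: chi2 = {stat:.3f}, p = {p:.4f}")

    # Post hoc: 3 pairwise comparisons, so the Bonferroni-adjusted
    # alpha is .05 / 3 = .0167 (the critical value Field refers to).
    alpha = 0.05 / 3
    for (name1, x), (name2, y) in combinations(conditions.items(), 2):
        w, pw = wilcoxon(x, y)  # two-sided by default
        print(f"{name1} vs {name2}: W = {w}, p = {pw:.4f}, "
              f"significant at adjusted alpha: {pw < alpha}")
    ```

    Note that `wilcoxon` reports a two-sided p-value by default; passing `alternative='greater'` or `alternative='less'` gives the one-tailed result, which is roughly half the two-sided value. With n = 7, SciPy may warn that the sample is small for its approximations, which echoes the advice elsewhere in this thread.
    
    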



    A two-tailed test is used when you have two groups in the population. I understand that you have one group only. If you have two groups, or two major sets of factors, then a two-tailed test will be appropriate.

    Stephen Gorard

    No – don’t use significance at all. It does not provide the kind of answer you want – just an illusion. With only 7 cases anyway, simply report your findings.

    Gary Fogal

    Interesting read Stephen. Thanks so much for getting that out to me. You certainly have me thinking!

    Do you know of any similar commentary in this regard that is published? If I’m going to go against the grain I’m going to need to cite something. 


    I was also of a similar view: why use such a big analysis for just 7 cases? It is better simply to describe the data. I was thinking maybe you have too many variables. For a small sample with little data, a descriptive interpretation is good. Anyway, good luck.
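    The descriptive approach suggested above can be sketched very simply: for n = 7, medians and paired differences may tell the reader more than p-values. The values below are made up for illustration.

    ```python
    import statistics

    # Two related conditions on 7 cases (illustrative data only)
    cond_a = [12, 15, 11, 18, 14, 13, 16]
    cond_b = [14, 14, 15, 20, 13, 17, 19]

    print("median A:", statistics.median(cond_a))   # -> 14
    print("median B:", statistics.median(cond_b))   # -> 15

    # Paired differences show the direction and size of change per case
    diffs = [b - a for a, b in zip(cond_a, cond_b)]
    print("paired differences:", diffs)
    print("median difference:", statistics.median(diffs))  # -> 2
    ```

    With so few cases, the full list of paired differences can be reported directly, letting readers judge the pattern for themselves.
    
    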

    Stephen Gorard

    Yes. All of that has been known since significance tests were first proposed. Just a few examples from the thousands who have not been taken in by this strange ‘religion’ over the last 100 years or so:

    Berkson, J. (1942) Tests of significance considered as evidence, Journal of the American Statistical Association, 37, 325-335

    Carver, R. (1978) The case against statistical significance testing, Harvard Educational Review, 48, 378-399

    Falk, R. and Greenbaum, C. (1995) Significance tests die hard: the amazing persistence of a probabilistic misconception, Theory and Psychology, 5, 75-98

    Gill, J. (1999) The insignificance of null hypothesis significance testing, Political Research Quarterly, 52, 3, 647-674

    Jeffreys, H. (1937) Theory of probability, Oxford: Oxford University Press

    Lipsey, M., Puzio, K., Yun, C., Herbert, M., Steinka-Fry, K., Cole, M., Roberts, M., Anthony, K. and Busick, M. (2012) Translating the statistical representation of the effects of educational interventions into more readily interpretable forms, US Department of Education: NCSER 2013-3000

    Nickerson, R. (2000) Null hypothesis significance testing: a review of an old and continuing controversy, Psychological Methods, 5, 2, 241-301

    Rozeboom, W. (1960) The fallacy of the null hypothesis significance test, Psychological Bulletin, 57, 416-428

    See also attached about the vital importance of retaining judgement about results – whatever kind of data is involved. 

    Gary Fogal

    Thanks kindly. I really appreciate you having taken the time on this. Cheers! Gary

    Gary Fogal

    Thanks for the advice – much appreciated. Best, Gary
