# Friedman ANOVA significant but post hoc tests not - how to conclude


#1350
Nina Walker
Member

Hi there,

I hope you can help; I would be very grateful! I have completed a repeated-measures experiment based on the quality of life scores of six individuals before and after a move (IV = three time points; DV = quality of life score).

My first question: although I tested the data and it came out as normally distributed, I have decided to use non-parametric tests because of the small sample size. Is this right?

Secondly, when I ran Friedman's ANOVA there was found to be a significant difference (p = 0.19), but when I ran Wilcoxon tests none of the three pairings were found to be significant once I had used a Bonferroni-corrected significance level of < 0.017. (The smallest result was 0.28.)

I am confused about what I can conclude from this: if there is no significant difference between the pairings, does that mean I cannot claim there is a significant change in scores overall?

I calculated the effect sizes and all three came out larger than 0.3, so can I use this to conclude there has been a change overall?

I have just read Andy Field's book, which made everything much clearer, but it didn't cover what to conclude when the post hoc tests are not significant, and I have struggled to find an answer in the forums.

Any help would be greatly appreciated,

Many thanks

N

#1354
Dave Collingridge
Participant

Greetings Nina,

Since your data are normally distributed, you could try running a regular repeated-measures ANOVA and see if the results are different. Parametric tests usually have more power than non-parametric tests like Friedman's.

For Friedman's ANOVA to be significant, the p-value should be less than or equal to 0.05. Was your p-value 0.19 or 0.019? The first is not significant; the second is.

If your overall test is significant but your post hoc tests are not, it may be due to small sample size and low power. In this case a Bonferroni correction may be too conservative when comparing time periods. In circumstances such as this you can justify not using a Bonferroni correction, because it may cause you to miss potential differences. Comparing groups without a correction is sometimes called the Least Significant Difference (LSD) approach.

If you take this route, you should probably state not using a correction (due to the low sample size) as a study limitation, and also state that a follow-up study with a larger sample size and a Bonferroni correction is needed to confirm your results. Of course, comparing time means without a correction is only a good idea if it yields the significant differences you expected given a significant overall test.
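The workflow above (omnibus Friedman test, then pairwise Wilcoxon tests checked against a Bonferroni-corrected level) can be sketched in Python with SciPy. The scores below are made up for six people at three time points; they are not the data from this thread, but they happen to reproduce the same awkward pattern of a significant omnibus test whose post hocs fail the Bonferroni threshold.

```python
# Hypothetical quality-of-life scores for six people at three time points
# (made-up numbers, not the data from this thread).
import numpy as np
from scipy import stats

t1 = np.array([58, 55, 62, 60, 66, 54])
t2 = np.array([59, 57, 65, 64, 71, 60])
t3 = np.array([61, 61, 71, 72, 81, 61])

# Omnibus test: Friedman's ANOVA across the three related samples.
chi2, p_overall = stats.friedmanchisquare(t1, t2, t3)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_overall:.4f}")

# Post hoc: pairwise Wilcoxon signed-rank tests.
alpha_bonf = 0.05 / 3  # Bonferroni-corrected level for three comparisons
posthoc_ps = []
pairs = [("t1 vs t2", t1, t2), ("t1 vs t3", t1, t3), ("t2 vs t3", t2, t3)]
for label, a, b in pairs:
    w, p = stats.wilcoxon(a, b)
    posthoc_ps.append(p)
    print(f"{label}: W = {w}, p = {p:.4f}, "
          f"significant after Bonferroni: {p <= alpha_bonf}")
```

With these numbers the omnibus p is about 0.0025, while every pairwise p is about 0.031, i.e. below 0.05 but above the Bonferroni level of roughly 0.017, exactly the situation described in the thread.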

#1353
Nina Walker
Member

Thanks very much for your response! The value was 0.019; apologies, typing error. That is helpful. Do you think I should report effect sizes as well, or are they not relevant in this instance?

Thanks again for your quick response!

#1352
Dave Collingridge
Participant

Yes, I think effect sizes would be useful because they convey the practical difference between time periods, which may be meaningful even though the differences are not statistically significant due to low power.
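For a Wilcoxon signed-rank test, the effect size recommended in Field's book is r = Z / sqrt(N), where Z comes from the normal approximation to the signed-rank statistic and N is the total number of observations across both time points. A minimal sketch, again with made-up scores and assuming no zero or tied differences:

```python
import math

def wilcoxon_effect_size(before, after):
    """Effect size r = Z / sqrt(N) for a paired Wilcoxon signed-rank test.

    Assumes no zero or tied differences (adequate for a quick sketch).
    """
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    # Rank the absolute differences (1 = smallest); no ties assumed.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    # Sum of ranks belonging to positive differences.
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    # Normal approximation to the signed-rank distribution.
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return z / math.sqrt(2 * n)  # N = total observations in the two samples

t1 = [58, 55, 62, 60, 66, 54]  # hypothetical time-1 scores
t2 = [59, 57, 65, 64, 71, 60]  # hypothetical time-2 scores
r = wilcoxon_effect_size(t1, t2)
print(f"r = {r:.2f}")  # about 0.64, well above the 0.3 mentioned above
```

By Cohen's common benchmarks (0.1 small, 0.3 medium, 0.5 large), values above 0.3 indicate at least a medium effect, which is why reporting them alongside non-significant post hocs can still be informative.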

#1351
Nina Walker
Member