questions with mixed design ANOVA:Discovering Statistics using R


Viewing 2 posts - 1 through 2 (of 2 total)
  • #2281
    Julie Hsu

    Dear all,

    I am doing some data analyses and my data have a between (4 levels) by within (5 levels) structure. Because my data do not follow a normal distribution and there are several violations of equal variance as well, I am using the robust bootstrap tests described in Chapter 14. The main functions I was using are sppba, sppbb, and sppbi, as described in Chapter 14. I was able to get to this point fine. However, I am not sure how to interpret the results. For example, when I run “sppba(4, 5, Data, est = mom, nboot = 1000)”, the outputs are:

    [1] “Taking bootstrap samples. Please wait.”
    [1] 0

    [1] -200.30387  -89.52152 -361.37309  110.78235 -161.06922 -271.85157

         [,1] [,2] [,3] [,4] [,5] [,6]
    [1,]    1    1    1    0    0    0
    [2,]   -1    0    0    1    1    0
    [3,]    0   -1    0   -1    0    1
    [4,]    0    0   -1    0   -1   -1

    Two questions I have are: 1) what does the p=0 mean? 2) The output contains test statistics ($psihat) for each pairwise comparison (e.g., -200.30387 for 1 vs 2). How do I know whether each pairwise comparison is significant or not? Can I get an overall test statistic for Factor A?

    I would be grateful if anyone with more experience with this could share your comments or point me to some references.

    Many thanks,



    hello there. there are a few things i would like to mention concerning this post.

    ok, for starters i don’t have Dr Field’s book (although i do use R quite regularly) so i’m not sure if this is just repeating what he says there or not. apparently, those functions belong to a ‘WRS’ package which Dr Wilcox uses to accompany his book ‘Introduction to Robust Estimation and Hypothesis Testing’. one thing that i do find a little bit iffy here is that the package is purposefully not available on CRAN and needs to be explicitly requested. he also didn’t provide a user manual for his package (as most people do), which means that unless one owns his book it is very difficult to actually make sense of what he’s doing and which output means what.

    but anyways, p=0 should mean that the empirical p-value after all the bootstrap re-samples is very, very small: none of the bootstrap re-samples was as extreme as the observed result, so really p < 1/nboot (here p < 1/1000), which is good. now, i might be wrong on this one but i believe in an excerpt from the book Dr Wilcox mentions that the result stored in the $p.value list object (i’m assuming it’s a list) is the overall p-value for the model, something like your ANOVA F test. the psihat values are quite likely the estimated values of each linear contrast, and the matrix printed after them is quite likely the contrast matrix telling you which groups each of the six values compares (with 4 groups you get 6 pairwise contrasts). and i’m not sure what you mean by ‘Factor A’ so i’ll have to leave it at that.
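    for what it’s worth, here’s a small sketch of how i’d poke at that output. the field names ($p.value, $psihat) are guesses based on what got printed, so run str() on the result first to see what the list actually contains:

    ```r
    # assuming sppba() returns a list (check with str(out) first!) whose
    # $p.value is the omnibus bootstrap p-value for the between factor and
    # whose $psihat holds the estimated contrast values
    library(WRS)  # Dr Wilcox's package; not on CRAN, has to be requested

    out <- sppba(4, 5, Data, est = mom, nboot = 2000)
    str(out)      # see what the returned list actually contains

    out$p.value   # overall bootstrap p-value for the between factor
    out$psihat    # estimated value of each linear contrast

    # a printed p-value of 0 just means no bootstrap re-sample was as
    # extreme as the observed result, i.e. p < 1/nboot (here < 1/2000)
    ```
    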

    i’m not sure if my answer is particularly helpful because without Dr Wilcox’s book it’s very hard to know what each function is doing and what code you need to get what you want. it’s a popular package so you probably only need to add an argument or tweak something simple somewhere.

    the good thing about R is that it’s incredibly flexible, so if one way of doing things doesn’t work, there’s always a way around. may i recommend the ‘lmPerm’ package, which conducts permutation tests (exact or sampled) for linear models? in my own experience and from monte carlo simulation results, i prefer permutation tests to bootstrap tests for robust estimation, although both usually give you pretty much the same answer. the good thing about lmPerm is that at least the documentation is free, so you can always read it to understand what you’re doing without needing to buy an extra book.
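    just to illustrate (with made-up column names, so adapt to your own data): if your data are in long format with one row per subject-by-occasion, something like this should give permutation p-values for both factors and their interaction. aovp() is lmPerm’s drop-in replacement for aov():

    ```r
    # hypothetical long-format data frame 'dat': columns score (outcome),
    # group (between, 4 levels), time (within, 5 levels), id (subject)
    library(lmPerm)

    mod <- aovp(score ~ group * time + Error(id / time),
                data = dat, perm = "Prob")  # "Prob" = sampled permutations
    summary(mod)  # permutation p-values for group, time, and group:time
    ```
    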

    hope it helps!
