Which N to use when converting standard error to SD
 This topic has 9 replies, 2 voices, and was last updated 4 years, 10 months ago by Stephen Gorard.


16th February 2016 at 5:39 am #471 – Sarah Williams (Member)
When converting a standard error to a standard deviation (using the formula SD = SE × √N), I am having trouble deciding which N to use – the N of the experimental group (for example) or the number of participants in the overall study.
An ANOVA was conducted and I don’t have the raw data, but I do have the standard errors and Ns of each group.
Experimental group N = 12
Control group N = 11
I would like to say:
The main effect of Group was significant, F(X, X)=X, p<0.001 (ƞ_{p}²=X), with higher scores reported for the experimental (M=X, SE = 3.01, SD = ???) compared with the control (M=X, SE = 3.18, SD = ???) group.
I would then like to report on another main effect (the main effect of stage), and convert those SEs into SDs.
Would I be correct in using the experimental group’s N (12) when converting the first SD mentioned from the SE, the control group’s N (11) when making the second conversion, and the N of participants overall (23) when reporting SDs for the main effect of stage?
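The conversion being asked about can be sketched in a few lines. The SE values (3.01 and 3.18) and group sizes (12 and 11) come from the post above; each group's SD is recovered using that group's own N:

```python
import math

def se_to_sd(se: float, n: int) -> float:
    """Convert a standard error of the mean back to a standard deviation."""
    return se * math.sqrt(n)

# Each group's SD uses that group's own N, not the overall study N.
sd_experimental = se_to_sd(3.01, 12)  # ≈ 10.43
sd_control = se_to_sd(3.18, 11)       # ≈ 10.55
```

For a main effect computed over the whole sample, the same function would be called with the overall N (23 here).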
Any help on this would be greatly appreciated 🙂 Thanks so much
19th February 2016 at 9:53 am #480 – Stephen Gorard (Participant)
Yes. The N would apply to the size of the group for whom the SD is calculated.
That is quite a basic question – not a problem, since it is better to ask. But it makes me wonder whether you understand all of the rest of the gobbledegook above. If you really have ‘significant’ results with such small cell sizes, then the difference in means must be enormous. Isn’t it better to simply cite the difference in means and let the reader see? What does the ANOVA tell you that you actually want to know?
20th February 2016 at 11:32 am #479 – Sarah Williams (Member)
Thanks for your comment.
The ANOVA shows the reader that the difference in means implies a ‘real’ difference between the groups, rather than just a nonsignificant, random difference: when you draw two samples from the same population, there’s bound to be some difference between the means, and that doesn’t necessarily mean they come from different populations.
If I just presented the means, or the difference between them, there would be no way to know whether this difference reflects a real difference between the groups or just random sampling ‘error’.
The question is basic, yes, but I always feel it’s better to check to make sure my calculations are correct. Of course, it seems intuitive that you would use the particular sample’s N (in the case of Group – the ‘sample’ within the sample), but when I checked on the internet to make sure this was correct, I could really not find anything at all on using a smaller N than the entire sample, which I thought was a bit strange, so I wanted to make sure I was using the formula correctly.
20th February 2016 at 1:59 pm #478 – Stephen Gorard (Participant)
As I suspected, you do not understand what ANOVA can and cannot do for you.
First query – is your sample randomly selected/allocated with full response? If not, ANOVA could not be used anyway (whatever it did).
If you have a complete random sample, ANOVA can help you estimate the chance that the difference between means is caused only by sampling variation, assuming that there is no difference in the population. This is very different from what you describe. It is not clear by what logic the probability that ANOVA estimates could be used to assess the probability that there is actually a difference in the population. The two probabilities are obviously not the same. To portray the latter as the former would be a serious error.
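Gorard's point can be made concrete with a small simulation under the null: draw both groups from the same population and see how often sampling variation alone produces a large difference in means. The group sizes (12 and 11) come from the thread; the population parameters and observed difference are purely hypothetical:

```python
import random

# Simulate the sampling distribution of the difference in means under
# the null hypothesis: both groups come from the SAME population, so
# any observed difference is pure sampling variation.
random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

population = [random.gauss(50, 10) for _ in range(10_000)]

diffs = []
for _ in range(5_000):
    a = random.sample(population, 12)   # "experimental" group size
    b = random.sample(population, 11)   # "control" group size
    diffs.append(abs(mean(a) - mean(b)))

observed = 9.0  # hypothetical observed difference in means
p = sum(d >= observed for d in diffs) / len(diffs)
# p estimates P(a difference this large | no difference in the population),
# NOT P(no difference in the population | this difference).
```

The final comment is the crux of the disagreement in this thread: the simulation yields the first probability, and nothing in it licenses reading off the second.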
21st February 2016 at 7:57 am #477 – Sarah Williams (Member)
The sample is random to a degree (mainly university students). It’s not possible to randomly allocate participants to a high-spider-fear and a low-spider-fear group, as these fears were present before the study; nevertheless, I have been instructed to report this ANOVA by my supervisor.
Please correct me if I’m wrong, as I am keen to learn. So if the ANOVA suggests that the difference in means is in fact not caused by random sampling variation, starting from the assumption that there is no difference in the population, would that not suggest that the difference is due to the samples being drawn from two different populations? (After attempting to control for systematic error.)
What would be the point of merely presenting two means, or just the difference between means? Surely that would tell the reader next to nothing?
21st February 2016 at 7:59 am #476 – Sarah Williams (Member)
I have decided to save you time and read your article ‘The widespread abuse of statistics by researchers: what is the problem and what is the ethical way forward?’ instead, as it discusses this topic.
21st February 2016 at 10:48 am #475 – Stephen Gorard (Participant)
What you are saying is that the sample is not randomly allocated. Therefore, even if ANOVA worked as you imagine, it could not be used here. You must not use it, and your supervisor is clearly wrong to demand it (even if it would have worked had you had a random sample).
And no, p(D|H) is not the same as p(H|D). The chance of dying if you are executed is very high, but the chance that someone who has died was executed is very small. Surely you can see that the two things are completely different.
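Gorard's execution example can be put into numbers with Bayes' theorem. All figures below are illustrative, not real statistics:

```python
# p(D|H) vs p(H|D): the execution example in numbers (all figures invented).
p_executed = 1e-6              # P(H): a tiny fraction of people are executed
p_die_given_executed = 0.99    # P(D|H): dying given execution is near-certain
p_die = 0.01                   # P(D): overall chance of dying (say, this year)

# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)
p_executed_given_died = p_die_given_executed * p_executed / p_die
# ≈ 0.0001: among those who died, almost none were executed,
# even though P(die | executed) is 0.99 by assumption.
```

The asymmetry is the whole point: the two conditional probabilities can differ by several orders of magnitude, just as p(data | null true) and p(null true | data) can.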
21st February 2016 at 10:51 am #474 – Stephen Gorard (Participant)
You would present the means, SDs and the differences between them as an ‘effect’ size, along with the number of cases. This would allow the reader to judge the substantive importance of the finding. Telling them something based on a test that is not appropriate for these data, and then misdescribing the result, would be far worse.
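One common way to express the means and SDs as a standardised ‘effect’ size is Cohen's d with a pooled SD (one choice among several; Gorard does not specify a formula here). The means below are hypothetical; the SDs are back-converted from the SEs in the original post:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical means; SDs ≈ SE × √N from the thread (SE 3.01, N 12; SE 3.18, N 11).
d = cohens_d(m1=60.0, sd1=10.43, n1=12, m2=51.0, sd2=10.55, n2=11)
```

Reported alongside the raw means, SDs and Ns, this lets the reader judge the size of the difference directly, which is the presentation Gorard is recommending.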
Or try presenting the number of counterfactuals needed to disturb the difference (see attached).
29th February 2016 at 4:34 am #473 – Sarah Williams (Member)
Thank you for taking the time to reply.
I certainly do not think that the two probabilities are the same. I have never actually mentioned what I think the probability of the alternate hypothesis being true is, as we can’t determine that from NHST.
It’s just that, given that the probability of the null being ‘true’ is less than .05, I have been taught that this is a form of support for the alternate hypothesis, although it doesn’t give a probability for it.
Of course ANOVAs don’t exactly say a whole lot on their own so other statistics such as effect sizes and confidence intervals are included also.
I’ve talked to my stats lecturer about using ANOVAs on samples which are not randomly allocated, and he said that it is possible to use them in quasi-experimental designs, although of course care must be taken in the interpretation. I’ve done some research into this and it appears there are different viewpoints on the issue.
29th February 2016 at 7:55 am #472 – Stephen Gorard (Participant)
I am afraid your lecturer is being paid under false pretences then. It is easy to see why: just consider what the p value would be the probability of if the cases had not been completely randomised.
And the H I mentioned was not the alternate; it was the null. Even if you could use an F test, the p value it would generate would tell you nothing useful. You want to know the probability of H given the data, but the F test tells you the probability of the data given that H is true.

