This research paradigm can help us understand our social endeavours in measurable terms. Evidence-based decision-making and programmatic interventions are unrivalled in today's research landscape. Let us all contribute to advancing this paradigm.
Website: http://www.methodspace.com/group/quantitativeresearch
A simple breakthrough would be for SPSS (and other widely used software) to distinguish between procedures based on sampling theory (standard errors, significance tests, etc.) and those that are simply analyses of numbers, with the latter as the default. The menus would then offer comparing means, correlations, regression and so on, but not t-tests, F-tests, etc. If users state that they have a complete random sample with no error (i.e. never), they can change the default. This would help new researchers see the futility of standard errors in such cases, and that good analysis of numbers is something separate.
Dear all,
A student of mine has collected think-aloud protocols from student readers. He has coded the protocols using three categories and has tallied the number of strategies in each category. We are interested in finding out whether there is any relationship between the number of strategies used in each category and some of the students' categorical variables (gender, age group, nationality, etc.). We are thinking of using Crosstabs (strategy category vs. categorical variables). I have two questions and would appreciate your comments:
1) Is a crosstab an appropriate test to use for this purpose?
2) If so, some categories have different numbers of participants (for example, 17 males but 11 females). Would this be a problem?
Thanks;
Mehdi
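On the second question, a minimal sketch may help: unequal group sizes are not themselves a problem for a chi-square test of independence on the crosstab, as long as the expected cell counts are adequate (a common rule of thumb is expected counts of at least 5). The counts below are hypothetical, invented purely for illustration (17 males, 11 females, three made-up strategy categories) — they are not Mehdi's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: gender (17 males, 11 females); columns: three strategy categories
# all counts are hypothetical, for illustration only
table = np.array([
    [8, 5, 4],   # males, row sum 17
    [3, 6, 2],   # females, row sum 11
])

stat, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.3f}, p = {p:.3f}, dof = {dof}")
print("smallest expected count:", round(expected.min(), 2))  # check the >= 5 rule of thumb
```

Note that the smallest expected count printed here falls below 5, which would suggest collapsing categories or an exact test; also, if each student contributes many tallies, the independence-of-observations assumption deserves some thought.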
Hi,
I am looking for a recent reference that discusses the limitations of conducting survey research with nationally representative samples.
Monique
Hi all
Is there anyone who can help with a Yates correction example?
If the probability of a homozygous recessive (aa) offspring from parents Aa and Aa is 0.25, then the probability of 2 aa out of 4 matings would be calculated using a paired-proportion test.
If we want to calculate the Z-ratio by hand to compare with the software, what would the numerator be?
2/4 − 1/4? If so, why is the SAS output different from the hand calculation?
I appreciate a response
thanks
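One possible source of the discrepancy (a sketch of the arithmetic, not a claim about which procedure SAS actually ran): many packages apply Yates' continuity correction to the one-sample proportion z-ratio, which shrinks the numerator |p̂ − p₀| by 1/(2n) before dividing by the null standard error. With p̂ = 2/4 and p₀ = 1/4:

```python
import math

def z_ratio(successes, n, p0, yates=False):
    """One-sample proportion z-ratio, optionally with Yates' continuity correction."""
    p_hat = successes / n
    num = abs(p_hat - p0)
    if yates:
        num = max(num - 1 / (2 * n), 0.0)  # continuity correction shrinks the numerator
    se = math.sqrt(p0 * (1 - p0) / n)      # standard error under the null hypothesis
    return num / se

print(round(z_ratio(2, 4, 0.25), 4))              # uncorrected: 0.25 / 0.2165 -> 1.1547
print(round(z_ratio(2, 4, 0.25, yates=True), 4))  # corrected: 0.125 / 0.2165 -> 0.5774
```

So a hand calculation without the correction and software output with it will disagree; checking the software's continuity-correction option is the first thing to try.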
well, there's a very simple reason for that: because that is not our purpose. methods and techniques developed by statisticians usually trickle down into other sciences to inform their practice, but then there's this entity called the "methodologist" which acts as a "knowledge translator" of sorts, trying to shape methods so that they become adequate to their substantive area of research. the problem is that methodologists in the social/behavioural/health sciences are rarely trained in formal mathematics (with some notable exceptions of course, such as biostatisticians). most people go through the standard graduate-level courses on applied statistics, which emphasize software use and rely almost exclusively on hand-waving arguments and metaphors to try and explain what goes on in the little black box we call SAS, SPSS, JMP, Minitab, etc... as a result, misconceptions generate misconceptions that perpetuate themselves over and over again, until one looks at the literature in disbelief at what people are actually doing.
statistical science has become a cornerstone of the research enterprise, creating a demand for people proficient in these methods. nevertheless, the number of PhDs in statistics (whether pure or applied) is still relatively small (math has never won the popularity contest of things people like to study), so simply out of sheer need statisticians "delegate" the tasks of data analysis to methodologists, quantitative analysts, etc., who may or may not fully grasp what they're supposed to be doing... and as you can imagine, these people later become professors or instructors who teach the same things to their students, who then add a little mix of their own, and that keeps rolling downhill, so we end up having (even in published, reputable introductory statistics textbooks) strange claims such as:
- regression needs all variables to be normally distributed (WRONG: the distributional assumptions in a linear model are on the RESIDUALS only)
- analysis of covariance using the pre-test as a covariate takes care of pre-existing differences and makes the groups you are comparing more similar (WRONG: ANCOVA cannot fix a lack of randomization, regardless of how many times people say so)
- a statistically significant ANOVA implies significance of the post-hoc tests (WRONG: the ANOVA can very well detect a significant linear combination of the group means that is not explicitly tested by the usual Tukey comparisons or whatever SPSS spits out)
...
and don't even get me started on hypothesis testing!! although Andy has already written a couple of posts that address this issue in much more detail
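The first misconception on that list can be shown in a few lines (a minimal sketch with simulated data, not anything from the thread): the predictor below is heavily right-skewed, yet ordinary least squares recovers the true coefficients, because the normality assumption applies to the residuals, not the variables themselves.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

x = rng.exponential(scale=2.0, size=n)             # heavily right-skewed predictor
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=n)   # normally distributed residuals

slope, intercept = np.polyfit(x, y, 1)             # ordinary least squares fit
print(round(slope, 2), round(intercept, 2))        # close to the true values 3 and 2
```

Running a normality test on `x` here would fail spectacularly, and the regression would still be perfectly valid.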
Hi, that is very true. As you said, statisticians publish findings for themselves, while clinicians want you to do things that are outdated.
I wonder why statisticians go through so much pain to develop new methods if they are not going to be used at some point in medical research?!
it seems like that's what (s)he's after. it would have been nice for you to do a power calculation beforehand and include it in your stats section; if you did one but did not include it, i think it would be good to do so now. but it seems like this person is attempting to use power analysis as a tool for data analysis through these post-hoc/post-experiment power calculations that, for reasons beyond my understanding, are very popular...
... in any case, i would skip the question. when i've been faced with questions like that, i direct people towards that citation or quite a few others that deal with the same issue. i also tend to casually comment that it seems interesting to me that journals dealing with applied research methodology sometimes advocate this procedure (there are published articles out there claiming it is still worthwhile), whereas the journals where statisticians and mathematical statisticians publish (The American Statistician, Journal of the American Statistical Association, Biometrika, etc.) have uniformly condemned it over the years. food for thought, i think.
in any case, yes, GPower should be able to handle it... i think. i haven't used it in a while, heh. :)
Hi Oscar
Thanks for your attention. I quickly looked at the paper. So the reviewer wants me to do a "post-experiment" power calculation, which this paper calls an "abuse of power". Is that what you mean?
Does that mean I can just skip his question? And if I ever want to do this, I would have to use GPower, right?
thanks
Hoenig, J. M., & Heisey, D. M. (2001). The Pervasive Fallacy of Power Calculations for Data Analysis. The American Statistician, 55(1), 19-24.
so yeah, i added the extra bolded and underlined terms so you can kindly direct whoever told you that to a very well-written paper where both analytic and simulation results show that what this person is asking you to do is essentially a waste of time. this is once again one of those situations where people who don't understand how things work mindlessly repeat misconceptions over and over again in some bizarre attempt to make them sound legit.
Hello, I am back with another Power question!
This time I have been asked a general power question, and I am really confused about how to approach it. Here is the question:
Given your small study size, and the fact that adherence to physical activity was low in both cases and controls, a type 2 error could occur. A power calculation should therefore be provided in the statistical methods section.
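For what it's worth, a prospective power calculation for this kind of case-control adherence comparison can be sketched by simulation (this is only an illustration: the adherence rates, group sizes, and the choice of a 2x2 chi-square test below are all assumptions, not taken from the study in question):

```python
import numpy as np
from scipy.stats import chi2_contingency

def simulated_power(p_case, p_control, n_case, n_control,
                    alpha=0.05, reps=2000, seed=0):
    """Estimate power of a 2x2 chi-square test by Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = rng.binomial(n_case, p_case)        # adherent cases
        b = rng.binomial(n_control, p_control)  # adherent controls
        # skip degenerate tables with an empty column
        if a + b == 0 or (n_case - a) + (n_control - b) == 0:
            continue
        table = np.array([[a, n_case - a], [b, n_control - b]])
        _, p, _, _ = chi2_contingency(table)    # Yates correction on by default
        if p < alpha:
            hits += 1
    return hits / reps

# hypothetical scenario: 30 cases vs 30 controls, adherence 40% vs 20%
print(round(simulated_power(0.40, 0.20, 30, 30), 2))
```

A dedicated tool such as GPower gives the same kind of answer analytically; but note, per the Hoenig and Heisey paper cited earlier in the thread, that this is only meaningful as a design-stage calculation, not as a post-hoc gloss on an observed p-value.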
© 2014 Created by SAGE Publications.