Methodspace - home of the Research Methods community

Information

Quantitative research

This research paradigm helps us understand our social endeavours in measurable terms. Evidence-based decision-making and programmatic interventions are unrivalled in today's research landscape. Let's all contribute together to the advancement of this paradigm.

Website: http://www.methodspace.com/group/quantitativeresearch
Members: 238
Latest Activity: Dec 8

Discussion Forum

Why Physics Envy will Persist in Political Science

Started by William J. Kelleher, Ph.D. Oct 23.

CFA MODEL FIT (URGENT HELP NEEDED PLS) 3 Replies

Started by Mouhamed Thiam. Last reply by Nebojsa Davcik, Ph.D. Jul 4.

PCA or EFA (Principal Axis Factoring) to purify my scale? 5 Replies

Started by Redouane Khiri. Last reply by Redouane Khiri Jun 23.

Comment Wall


Comment by Cathy Evans on November 24, 2014 at 17:34

Stories of African American Graduates of Online Doctoral Programs Needed!

Cathy Evans, a doctoral student at Walden University, is conducting a study. She is seeking African American doctorate recipients who, within the past five years, earned a doctorate from an online doctoral psychology program at a Title IV, privately operated, for-profit online institution that practices open admission (FPCU), to share their experiences and knowledge of completing such a program. The study explores the academic achievement of African American students who completed online doctoral programs and attained a doctorate in psychology at Capella University, Walden University, the University of Phoenix, or the University of the Rockies.

Invest a little time from your schedule by contributing your experience to an emerging body of knowledge.

To participate, you must be (a) African American and (b) have attained a doctorate in psychology from an FPCU within the past 5 years. The time involved will consist of the following:

  • One unpaid 60- to 90-minute in-depth interview: face-to-face, live chat, Skype, or telephone, and
  • The time you spend reviewing your interview summary.

For more information, please contact Cathy Evans at lhi_help@netzero.com.

Comment by Stephen Gorard on July 3, 2014 at 11:59

A simple breakthrough would be for SPSS (and other widely used software) to distinguish between those procedures based on sampling theory (standard errors, significance, etc.) and those that are simply analyses with numbers. The default should be the latter. Thus the menus would offer comparing means, correlations, regression and so on, but not t-tests, F-tests, etc. If the user then states that they have a complete random sample with no error (i.e. never), they can change the default. This would help new researchers see the futility of standard errors, and that good analysis with numbers is something separate.
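A rough sketch of what such a default might look like, in Python rather than SPSS (the data and variable names here are invented for illustration): group means, a raw and standardised difference, and a correlation reported purely as descriptions of the numbers, with no standard errors or p-values attached.

```python
import numpy as np
import pandas as pd

# Hypothetical data: an outcome measured in two groups, plus a covariate.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["A", "B"], 50),
    "score": np.concatenate([rng.normal(50, 10, 50), rng.normal(55, 10, 50)]),
    "hours": rng.uniform(0, 20, 100),
})

# "Analysis with numbers": describe, compare, correlate -- no sampling theory.
group_means = df.groupby("group")["score"].mean()
mean_diff = group_means["B"] - group_means["A"]
effect_size = mean_diff / df["score"].std(ddof=1)   # standardised difference
corr = df["score"].corr(df["hours"])                # Pearson r as a description

print(group_means)
print(f"Difference in means: {mean_diff:.2f} (about {effect_size:.2f} SD)")
print(f"Correlation between score and hours: {corr:.2f}")
```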

Comment by Mehdi Riazi on February 13, 2013 at 23:06

Dear all,

A student of mine has collected think-aloud protocols from student readers. He has coded the protocols using three categories and has tallied the number of strategies in each category. We are interested in finding out whether there is any relationship between the number of strategies used in each category and some of the students' categorical variables (gender, age group, nationality, etc.). We are thinking of using crosstabs (strategy category vs. categorical variables). I have two questions and would appreciate your comments:

1) Is a crosstab an appropriate test to use for this purpose?

2) If yes, in some categories we have different numbers of participants, for example 17 males but 11 females. Would this be a problem?

Thanks;

Mehdi
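For readers who want to try the approach described above, here is a minimal sketch in Python (pandas/scipy) with invented data and column names: a crosstab of strategy category against gender, followed by a chi-square test of independence. Unequal group sizes (e.g. 17 males vs. 11 females) are not themselves a problem for this test, provided the expected cell counts are not too small.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical think-aloud data: one row per strategy instance.
data = pd.DataFrame({
    "strategy_category": ["global", "local", "support", "global", "local",
                          "global", "support", "local", "global", "support"],
    "gender": ["male", "male", "female", "female", "male",
               "male", "female", "female", "male", "female"],
})

# Crosstab of strategy category by gender.
table = pd.crosstab(data["strategy_category"], data["gender"])
print(table)

# Chi-square test of independence on the contingency table.
# (Real data would need larger expected counts than this toy example has.)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
print("Expected counts:\n", expected)
```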

Comment by M. Monique McMillian-Robinson on January 5, 2013 at 19:24

Hi,

I am looking for a recent reference that discusses the limitations of conducting survey research with nationally representative samples.

Monique

Comment by MJ on October 5, 2012 at 14:45

Hi all

Is there anyone who can help with a Yates correction example?

If the probability of a homozygous recessive (aa) offspring from parents (Aa) and (Aa) is 0.25, then the probability of 2 aa offspring out of 4 matings would be calculated using a paired proportion.

I wonder, if we want to calculate the Z-ratio by hand to compare with the software, what would the numerator be?

2/4 - 1/4? And if yes, why are the results of the SAS output different from the hand calculation?

I would appreciate a response.

thanks
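One common reason a hand calculation and software output disagree here is a continuity (Yates-type) correction. Below is a minimal sketch in plain Python using the numbers in the question (2 aa offspring out of 4, expected proportion 0.25). Note that with n = 4 the normal approximation is very rough in any case; this only illustrates the correction and is not a statement of what SAS does by default.

```python
from math import sqrt

# Numbers from the question: 2 aa offspring observed out of 4, expected p0 = 0.25.
n, x, p0 = 4, 2, 0.25
p_hat = x / n                      # observed proportion = 0.5

se = sqrt(p0 * (1 - p0) / n)       # standard error under the null proportion

# Uncorrected z-ratio: the numerator is (2/4 - 1/4), as guessed in the question.
z_plain = (p_hat - p0) / se

# Continuity (Yates-type) correction: shrink the numerator by 1/(2n).
z_corrected = (abs(p_hat - p0) - 1 / (2 * n)) / se

print(f"z without correction: {z_plain:.3f}")      # about 1.155
print(f"z with correction:    {z_corrected:.3f}")  # about 0.577
```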

Comment by Oscar on April 17, 2012 at 7:22

well, there's a very simple reason for that: because that is not our purpose. methods and techniques developed by statisticians usually trickle down into other sciences to inform their practice, but then there's this entity called the "methodologist" which acts as a "knowledge translator" of sorts, trying to shape methods so that they become adequate to their substantive area of research. the problem is that methodologists in the social/behavioural/health sciences are rarely trained in formal mathematics (with some notable exceptions of course, such as biostatisticians). most people go through the standard graduate-level courses on applied statistics, which emphasize software use and rely almost exclusively on hand-waving arguments and metaphors to try to explain what goes on in the little black box we call SAS, SPSS, JMP, Minitab, etc. as a result, misconceptions generate misconceptions that perpetuate themselves over and over again, until one looks at the literature in disbelief at what people are actually doing.

statistical science has become a cornerstone of the research enterprise, creating a demand for people proficient in these methods. nevertheless, the number of PhDs in statistics (whether pure or applied) is still relatively small (math has never won the popularity contest of things people like to study), so simply out of sheer need statisticians "delegate" the tasks of data analysis to methodologists, quantitative analysts, etc., who may or may not be able to fully grasp what they're supposed to be doing. and as you can imagine, these people later become professors or instructors who teach the same things to their students, who then add a little mix of their own, and that keeps on rolling down, so we end up having (even in published, reputable introductory textbooks in statistics) strange claims such as:

- regression needs all variables to be normally distributed (WRONG, the distributional assumptions of any linear model apply only to the RESIDUALS; see the sketch at the end of this comment)

- analysis of covariance using the pre-test as covariate takes care of pre-existing conditions to make the groups you are comparing more similar (WRONG, ANCOVA cannot fix lack of randomization regardless of how many times people say so)

- a statistically significant ANOVA implies significance of the post-hoc tests (WRONG, ANOVA can very well detect a linear combination of the group means that is not expressly tested in the usual Tukey comparisons or whatever else SPSS spits out as output)

...

and don't even get me started on hypothesis testing!! although Andy has already written a couple of posts that address this issue in much more detail
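A minimal simulation sketch (numpy/scipy, made-up data) illustrating the first point above: the predictor and the outcome can both be strongly skewed while the residuals of the linear model are perfectly well behaved, because the distributional assumption applies to the residuals, not to the variables themselves.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Heavily skewed predictor, normal errors: y = 2 + 3x + e.
x = rng.exponential(scale=2.0, size=500)   # skewed, nowhere near normal
e = rng.normal(loc=0.0, scale=1.0, size=500)
y = 2 + 3 * x + e                          # y inherits the skew of x

# Fit a simple linear regression and compute residuals.
slope, intercept, r, p, se = stats.linregress(x, y)
residuals = y - (intercept + slope * x)

# Skewness: x and y are skewed, the residuals are not.
print("skew(x):", round(stats.skew(x), 2))
print("skew(y):", round(stats.skew(y), 2))
print("skew(residuals):", round(stats.skew(residuals), 2))

# Normality check on the residuals (the only place it matters).
print("Shapiro-Wilk on residuals:", stats.shapiro(residuals))
```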

Comment by MJ on April 16, 2012 at 22:16

Hi, that is very true. As you said, statisticians publish the facts among themselves, while clinicians want you to do things that are outdated.

I wonder, then, why statisticians go through so much pain to develop new methods if they are not going to be used at some point in medical research?!

Comment by Oscar on April 16, 2012 at 18:54

it seems like that's what (s)he's after. it would have been nice for you to do a power calculation beforehand and include it in your stats section. if you did it and did not include it, i think it would be good for you to do so now. but it seems like this person is attempting to use power analysis as a tool for data analysis through these post-hoc/post-experiment power calculations that, for reasons beyond my understanding, are very popular...

... in any case, i would skip the question. when i've been faced with questions like that, i direct them towards that citation or quite a few other ones that deal with the same issue. i also tend to casually comment that it seems interesting to me that journals dealing with applied research methodology sometimes advocate this procedure (because there are published articles out there that think this is still worthwhile to do), whereas the journals where statisticians or mathematical statisticians publish (American Statistician, Journal of the American Statistical Association, Biometrika, etc.) have uniformly condemned doing this over the years. food for thought, i think.

in any case, yes, GPower should be able to handle it... i think. i haven't used it in a while, heh. :)
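For completeness, a minimal sketch of the prospective (a priori) version of the calculation, which is what tools like GPower are meant for, done here in Python with statsmodels; the effect size, alpha and target power are hypothetical placeholders, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis for a two-sample t-test.
# Hypothetical inputs: expected standardised effect size d = 0.5, alpha = 0.05,
# desired power = 0.80, equal group sizes.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                    ratio=1.0, alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")   # about 64
```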

Comment by MJ on April 16, 2012 at 13:27

Hi Oscar

Thanks for your attention. I quickly looked at the paper, so the reviewer wants me to do a "post-experiment" power calculation, which this paper calls an "abuse of power". Is that what you mean?

Does that mean I can just skip his question? And if I ever want to do this, I would have to use GPower, right?

thanks

Comment by Oscar on April 16, 2012 at 7:31

Hoenig, J. M., & Heisey, D. M. (2001). The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis. The American Statistician, 55(1), 19-24.

so yeah, i added the extra bolded and underlined terms so you can kindly direct whoever told you that to a very well-written paper where both analytic and simulation results show that what this person is asking you to do is essentially a waste of time. this is once again one of those situations where people who don't understand how things work mindlessly repeat misconceptions over and over again in some bizarre attempt to make them sound legit.

Members (238)