- This topic has 6 replies, 2 voices, and was last updated 8 years, 9 months ago by Anonymous.
6th July 2012 at 3:10 pm #2262
When using Structural Equation Modelling, are correlations between latent variables in the measurement model desirable or not?

9th July 2012 at 2:51 am #2268
It depends on whether you're hypothesising them in your model or not... there's nothing inherently good or bad about them, mathematically speaking.

9th July 2012 at 6:17 am #2267
My work aims to test the validity of a theoretical construct, namely six different factors affecting voting behaviour. So I should not bother with the correlation coefficients among the latent variables (which are non-significant, by the way) and should just focus on the causal model...

9th July 2012 at 8:14 am #2266
That sounds like a good plan... I think. Well, OK, let me rephrase that. As a methodologist, the correct thing to do would be to fit both the model with those correlations constrained to 0 (which I guess is what you are after) and the model with them freely estimated, and compare the fit of the two. If your model with the correlations fixed to 0 fits the data essentially as well as the free model, then you can ignore them and go on your merry way. IF, however, letting those correlations be freely estimated gives a considerably better fit than you had initially expected, then I'm afraid you cannot just ignore them: you'll either need to be able to explain them, or change your structural model.
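The nested comparison described above (correlations fixed to 0 vs. freely estimated) is usually decided with a chi-square difference test. A minimal sketch in Python — the fit statistics plugged in at the bottom are hypothetical illustrations, not values from this thread:

```python
from scipy.stats import chi2


def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test for nested SEMs.

    The constrained model (latent correlations fixed to 0) is nested in
    the free model, so it has more degrees of freedom and a chi-square
    at least as large. A significant difference means the constraints
    worsen fit and the correlations cannot simply be ignored.
    """
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value


# Hypothetical fit statistics for illustration only:
d_chi, d_df, p = chi_square_difference(chisq_constrained=58.3, df_constrained=15,
                                       chisq_free=41.7, df_free=12)
print(f"Δχ² = {d_chi:.1f}, Δdf = {d_df}, p = {p:.4f}")
```

Here a small p-value would favour the freely estimated model, i.e. the constrained-to-zero correlations are not tenable.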
It's really up to you how to proceed, but if I were a reviewer of your paper (and I review the methods sections of papers relatively often), the first thing I would ask is: have you considered and tested alternative models for your data?

9th July 2012 at 8:47 am #2265
Thanks for the answers... good point... As long as the CFA and the causal model are identified and produce acceptable fit values, I'd prefer to keep it this way... On the alternative-model point: I have tested the theoretical construct with two models. In model 1, the IVs (6 factors of voting) directly predict the DV (vote). In model 2, two of the latent variables predict vote indirectly (personal values → political values → VOTE, and political values → ideology → VOTE). The results give similar estimates and fit-index values. Would you (as a referee) be satisfied with this kind of justification, or would you ask for more alternative models?

10th July 2012 at 6:02 am #2264
Uhmm... I'm not sure I'm following. If you test an alternative model which also provides acceptable fit, then how are you deciding between the alternative model and your original model? Ideally, your original model should provide better fit or at least be more parsimonious than the alternative. If neither model is more parsimonious and they fit about equally well, which one would you choose?

10th July 2012 at 6:32 am #2263
I could not express my point clearly... my mistake... Here is what I have done: I tested my first model (model 1, in which the IVs, the 6 factors of voting, directly predict the DV, vote) on 4 different parties. So in effect I tried model 1 with four different data sets, belonging to four different political parties. It performed well. I also created a second model, model 2, which proposes an alternative, more complex causal structure than model 1, and tested it on the same four data sets. It also performed well. The reason for the second model is my personal "theoretical greediness": the literature argues that personal values shape political values, and that political values shape ideology. I tested this point in model 2. It may sound excessive, but it is fun to see both models work quite well across all eight model-by-data-set combinations... 🙂
(For a similar approach: Leimgruber, P. (2011). Values and Votes: The Indirect Effect of Personal Values on Voting Behavior. Swiss Political Science Review, 17, 107-127.)
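When two competing models fit about equally well, as in the exchange above, one conventional tiebreaker is an information criterion that penalises complexity. A hedged sketch using one common SEM form of Akaike's criterion, AIC = χ² + 2q (q = number of freely estimated parameters) — the chi-square values and parameter counts below are hypothetical, not taken from the thread:

```python
def sem_aic(chisq, n_free_params):
    """AIC for a fitted SEM in the common chi-square form:
    AIC = chi-square + 2 * (number of freely estimated parameters).
    Lower is better: it rewards fit but penalises extra parameters."""
    return chisq + 2 * n_free_params


# Hypothetical values: two models with similar fit, different complexity.
aic_model1 = sem_aic(chisq=41.7, n_free_params=24)  # simpler, direct-effects model
aic_model2 = sem_aic(chisq=39.8, n_free_params=30)  # more complex, indirect-effects model
print(aic_model1, aic_model2)
```

With numbers like these, the simpler model wins despite its slightly larger chi-square, which is one way a referee might formalise the "more parsimonious" criterion mentioned above.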