29th September 2011 at 7:05 pm #3081
Use this to post questions specifically about SPSS.

4th October 2011 at 11:53 am #3093 – Jan Winter, Member
I’m looking into which method of analysis (discriminant function analysis vs. a Naive Bayes classifier) is more effective at predicting whether two or more offences have been committed by the same serial offender.
To get a more objective comparison, I’d like to run a ROC analysis on the predicted probabilities from both methods.
My problem is the following:
Regardless of whether I choose leave-one-out cross-validation (LOOCV) for classification or not, the saved probabilities of group membership are identical in SPSS 19.
This is rather puzzling, since the full model classifies 52% of the cases as belonging to the correct series, whereas LOOCV yields only 27% correct classifications.
I’ve looked everywhere but can find neither an answer nor syntax that helps me extract these LOOCV probabilities.
It’s quite an urgent problem. Any help would be greatly appreciated!!
Jan

6th October 2011 at 8:53 am #3092
Sorry, but I don’t know the answer – I don’t know much about ROC analysis and I’ve never even heard of the Naive Bayes classifier.
andy

6th October 2011 at 12:00 pm #3091 – Jan Winter, Member
Thanks for replying!
The problem is solved: SPSS does not provide leave-one-out cross-validated probabilities for DFA, and I couldn’t find any syntax that does.
However, thanks to this script (Discriminant Analysis in R), I was able to get the desired probabilities in R.
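For readers without R to hand, the same leave-one-out posteriors can be computed by hand. Below is a minimal Python sketch on synthetic two-group data (all numbers invented); the `lda_posteriors` helper is written just for this example and mirrors what R’s `MASS::lda(..., CV = TRUE)` reports:

```python
import numpy as np

def lda_posteriors(X_train, y_train, X_test):
    """Posterior group probabilities under linear discriminant analysis:
    Gaussian class densities with a pooled covariance matrix and
    class-frequency priors."""
    classes = np.unique(y_train)
    n, p = X_train.shape
    pooled = np.zeros((p, p))
    means, priors = [], []
    for c in classes:
        Xc = X_train[y_train == c]
        mu = Xc.mean(axis=0)
        means.append(mu)
        priors.append(len(Xc) / n)
        pooled += (Xc - mu).T @ (Xc - mu)
    inv = np.linalg.inv(pooled / (n - len(classes)))
    # linear discriminant scores; a softmax over them gives posteriors
    scores = np.column_stack([
        X_test @ inv @ mu - 0.5 * mu @ inv @ mu + np.log(pr)
        for mu, pr in zip(means, priors)
    ])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# synthetic two-group data standing in for the offence-linkage variables
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(1.5, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

# leave-one-out: refit without case i, then score case i
loo = np.vstack([
    lda_posteriors(np.delete(X, i, 0), np.delete(y, i), X[i:i + 1])
    for i in range(len(y))
])
print("LOOCV accuracy:", (loo.argmax(axis=1) == y).mean())
```

In R, `MASS::lda(X, y, CV = TRUE)$posterior` returns these leave-one-out posteriors directly, and either version can then feed a ROC analysis.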
We can’t wait for your new book!
Jan

4th November 2011 at 9:38 am #3090 – Alissa, Member
I am trying to model contrasts for both within-subjects (WS) and between-subjects (BS) effects in a repeated-measures design using SPSS syntax. Let’s assume that as the WS factor I use 5 different picture categories (pic1, pic2, pic3, pic4, pic5) and as the BS factor 2 matched groups (group1, group2). I expect a main effect of picture category that follows a quadratic trend (4, -1, -6, -1, 4), a main effect of group (1, -1), and an interaction of the two factors.
I tried using your MANOVA approach, but it doesn’t work if I assume a non-linear trend for picture category. So I have now turned to the LMATRIX/MMATRIX approach, but SPSS won’t give me significance tests for all three expected effects. Could you take a look at the syntax?
GLM pic1 pic2 pic3 pic4 pic5 BY group
/WSFACTOR = piccategory 5 Simple(1)
/LMATRIX = group 1 -1
/MMATRIX = 'Quadratic test for picture category' pic1 4 pic2 -1 pic3 -6 pic4 -1 pic5 4
/METHOD = SSTYPE(3)
/INTERCEPT = INCLUDE
/CRITERIA = ALPHA(.05)
/WSDESIGN = piccategory
/DESIGN = group.
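Whatever the GLM syntax ends up looking like, all three effects can be cross-checked by hand: collapsing each subject’s five scores with the quadratic weights gives one trend score per subject, and ordinary t-tests on those scores test the trend and its interaction with group, while t-tests on the subject means test the group effect. A hedged Python sketch on synthetic data (all numbers invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
quad = np.array([4, -1, -6, -1, 4])       # quadratic trend weights
# 30 synthetic subjects x 5 picture categories, quadratic trend built in
scores = rng.normal(0, 1, (30, 5)) + 0.3 * quad
group = np.repeat([0, 1], 15)             # two matched groups of 15

trend = scores @ quad                     # one trend score per subject
# main effect of the quadratic trend across all subjects
t1, p_trend = stats.ttest_1samp(trend, 0)
# trend x group interaction: does the trend differ between groups?
t2, p_inter = stats.ttest_ind(trend[group == 0], trend[group == 1])
# group main effect on the per-subject means
t3, p_group = stats.ttest_ind(scores[group == 0].mean(axis=1),
                              scores[group == 1].mean(axis=1))
print(p_trend, p_inter, p_group)
```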
Thank you very much!
Maimu

30th November 2011 at 5:43 pm #3089 – Krista Ritchie, Member
I need confidence intervals around median scores for groups that I have compared with nonparametric analyses, but I can’t find how to request median confidence intervals from SPSS. Can someone please advise?
Krista

19th December 2011 at 2:26 pm #3088
Pretty sure SPSS doesn’t do this, but in R you could just bootstrap the median to get a bootstrapped CI (see http://jep.textrum.com/index.php?art_id=33).
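The idea is only a few lines in any language: resample the data with replacement, take the median of each resample, and read the CI off the percentiles. A Python sketch with a made-up skewed sample:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(2.0, 50)          # placeholder skewed sample

# resample with replacement and take the median of each resample
boots = np.array([np.median(rng.choice(data, size=len(data)))
                  for _ in range(2000)])
lo, hi = np.percentile(boots, [2.5, 97.5])   # 95% percentile bootstrap CI
print(f"median {np.median(data):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```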
andy

19th December 2011 at 3:05 pm #3087 – Krista Ritchie, Member
Thank you for your response. I did a bit of a work-around in SPSS to get the median CI: under the RATIO command, I specified the denominator to be 1 and requested the median with a CI under Statistics. The ‘ratio’ is then just the group median score, and a CI is reported!
Krista

19th April 2012 at 9:13 am #3086 – Krupa Sheth, Member
I am running a test–retest reliability analysis on a sample of 32 cases split into three groups. I first want to find out whether the data are normally distributed.
I have tried two approaches. The first, through Analyze > Explore, gave tests that were all significant (i.e. the data are not normally distributed); I looked mainly at the Shapiro-Wilk tests because of the small sample size. The second, through Nonparametric Tests > 1-Sample K-S, gave varied results.
The issue is that, as I understand it, the first method (the Explore function) applies an additional correction, which explains why I am more likely to get significant results there. Is there a less conservative approach I could use for my groups?
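The discrepancy can be reproduced outside SPSS. Explore reports Shapiro-Wilk plus a Lilliefors-corrected K-S test, whereas the 1-Sample K-S dialog tests against a normal with parameters estimated from the data but without that correction, which makes its p-values too large (conservative). A small scipy sketch with a synthetic sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(50, 8, 32)                 # synthetic sample of n = 32

w, p_sw = stats.shapiro(x)                # Shapiro-Wilk, as in Explore
# plain one-sample K-S against a normal with the sample mean/sd; without
# a Lilliefors-type correction this p-value tends to be too large
d, p_ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
print(f"Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")
```

If the corrected K-S is wanted outside SPSS, statsmodels provides it as `statsmodels.stats.diagnostic.lilliefors`.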
Any help is appreciated.

25th April 2012 at 12:56 pm #3085 – Rebone Gcabo, Member
Hi there Andy
I am developing a measurement tool which needs to be validated. I have done a factor analysis, but I would also like to do mediation and SEM. How do I do these calculations in SPSS?

2nd May 2012 at 8:48 am #3084 – Anonymous, Inactive
SPSS won’t do it in its stand-alone version: you need to buy (or install, if you already have it) the AMOS module to do any SEM analysis.
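For observed-variable mediation specifically, no SEM software is strictly required: the indirect effect a×b comes from two ordinary regressions and can be bootstrapped for a CI. A minimal Python sketch in which x, m and y are synthetic placeholders for a predictor, mediator and outcome (all effect sizes invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.normal(0, 1, n)                       # predictor (placeholder)
m = 0.5 * x + rng.normal(0, 1, n)             # mediator, partly driven by x
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)   # outcome

def indirect(x, m, y):
    """Indirect effect a*b: a from regressing m on x, b (the slope of m)
    from regressing y on x and m together."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

boots = []
for _ in range(1000):                 # percentile bootstrap of a*b
    idx = rng.integers(0, n, n)       # resample cases with replacement
    boots.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Within SPSS itself, Preacher and Hayes’s macros (INDIRECT, and later PROCESS) are a widely used way to get this kind of bootstrapped indirect effect.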
Regarding your question on mediation: do you mean mediation among latent variables (i.e. mediated SEM models) or just mediation in general?

2nd May 2012 at 9:09 am #3083 – Rebone Gcabo, Member
I mean mediation in general. The variable in my study is social interaction among project team members. I want to test a model in which the relationship between social interaction and project team performance is mediated by cognitive style. Data were collected from 400 project team members and leaders, and I would now like to do an SEM mediation analysis.

8th May 2012 at 11:04 am #3082 – Kyriakos Antoniou, Member
I’ve run a bootstrap multiple regression in SPSS using the bootstrap module. However, I have some questions about the SPSS output and how to report the results.
How do I report the multiple correlation coefficient (R²) and the significance of the F-ratio for the overall model in the bootstrap case? Do I use the R² and significance test from the original, non-bootstrapped multiple regression? SPSS prints an R² and a significance value for the F-ratio only for the original, non-bootstrapped regression, not for the bootstrapped model. Or do I report the results of both models (bootstrapped and non-bootstrapped)?
I am asking this because with the non-bootstrapped model (5 predictor variables) I get only one significant predictor and marginal significance for the overall model. However, when I do the bootstrap multiple regression I get 2 significant predictors. So (I might be really wrong), shouldn’t the overall fit of the model also improve with the new bootstrap results?
Also, another question about interpreting multiple regression in general: how do I interpret a multiple regression when the overall model is non-significant or marginally significant but some of the predictors in the coefficients table are significant? Can we still claim that these variables are significant predictors of the dependent variable, or is this claim compromised by the non-significance of the overall model?
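On the reporting question, it may help to see what case-resampling bootstrap regression actually does: it re-estimates the sampling variability of the coefficients, not the model itself, so R² and the overall F-test still come from the one original fit and the overall fit does not “improve”. A hedged Python sketch of the underlying procedure (synthetic data, 5 predictors of which only one matters):

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 80, 5
X = rng.normal(0, 1, (n, k))             # five synthetic predictors
y = 0.5 * X[:, 0] + rng.normal(0, 1, n)  # only the first one matters

def coefs(X, y):
    design = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(design, y, rcond=None)[0]

b = coefs(X, y)                          # the ONE original fit
fitted = np.column_stack([np.ones(n), X]) @ b
# R^2 comes from this original fit; bootstrapping replaces the
# coefficient standard errors and CIs, not the fit statistics
r2 = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()

boots = np.empty((1000, k + 1))
for i in range(1000):                    # resample cases, refit
    idx = rng.integers(0, n, n)
    boots[i] = coefs(X[idx], y[idx])
ci = np.percentile(boots, [2.5, 97.5], axis=0)  # 95% CI per coefficient
print("R^2 =", round(r2, 3))
print("CI for first slope:", ci[:, 1])
```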
I’m really looking forward to your help!
Thanks a lot in advance,
- The forum ‘Default Forum’ is closed to new topics and replies.