I'm not familiar with the way that SPSS does multiple imputation, but
most software combines estimates that have a parameter and a standard
error.
SPSS imputes 5 datasets, then it calculates the pooled parameter
estimate (the B) by averaging the estimates across the datasets.
Then it looks at the standard deviation of those estimates.
The pooled standard error comes from averaging the squared standard
errors (the within-imputation variance) and adding a term based on the
variance of the estimates across datasets (the between-imputation
variance). It's described here:
http://sites.stat.psu.edu/~jls/mifaq.html#howto
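To make that concrete, here's a small sketch of that pooling rule (Rubin's rules) in Python. The numbers are made up for illustration:

```python
# A sketch of Rubin's rules for pooling, assuming each of m imputed
# analyses gives a point estimate (the B) and a standard error.
import math

def pool_rubin(estimates, std_errors):
    """Pool m point estimates and standard errors via Rubin's rules."""
    m = len(estimates)
    b_bar = sum(estimates) / m                    # pooled estimate: the average
    within = sum(se**2 for se in std_errors) / m  # average within-imputation variance
    between = sum((b - b_bar)**2 for b in estimates) / (m - 1)  # variance of the estimates
    total = within + (1 + 1/m) * between          # the "adding something" part
    return b_bar, math.sqrt(total)

# Five imputed datasets give five slightly different Bs and SEs (invented):
bs  = [0.52, 0.48, 0.55, 0.50, 0.47]
ses = [0.110, 0.105, 0.112, 0.108, 0.109]
b, se = pool_rubin(bs, ses)
```

The pooled SE is always at least as big as the average of the individual SEs, because the between-imputation term reflects the extra uncertainty from the missing data.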
None of the statistics you mention has that: there's no standard
error for R2, or for adjusted R2, or for the change statistics, or for
the Durbin-Watson test.
If you can get it from the data, you can find the average R and R2 -
note, however, that the average of the R^2 values will not equal the
square of the average R, although they should be close. You could also
Fisher transform the Rs, average, and back-transform.
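A quick illustration of both points, with invented R values:

```python
# Sketch: average(R^2) vs (average R)^2, and a Fisher-z average of the Rs.
# The five R values here are made up for illustration.
import math

rs = [0.30, 0.35, 0.28, 0.40, 0.32]

mean_r_sq = sum(r**2 for r in rs) / len(rs)   # average of the R^2 values
sq_mean_r = (sum(rs) / len(rs))**2            # square of the average R

# Fisher transform: z = atanh(r); average in z-space, back-transform with tanh
z_mean = sum(math.atanh(r) for r in rs) / len(rs)
fisher_avg_r = math.tanh(z_mean)
```

The average of the squares is always at least the square of the average (they differ by the variance of the Rs), which is why they're close when the Rs don't vary much across imputations.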
I think there's a way of combining F statistics in multiple imputation
- I'm not sure whether SPSS does it, and it's rarely done. You might
need to do it by hand; that will give you the p-value associated with
the R2 change (which is probably what you want). A bit of googling
found me two different sites where the question was asked, but no
answer: https://communities.sas.com/thread/9893?start=0&tstart=0 and
http://stats.stackexchange.com/questions/34069/mean-comparisons-following-multiple-imputation
. This paper seems to cover it, but I'm not sure, 'cos I'm too lazy to
read it (and much too lazy to code it).
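For what it's worth, my understanding is that the usual "by hand" recipe is the D2 procedure of Li, Meng, Raghunathan and Rubin (1991): treat each F for the R2 change (k numerator df) as an approximate chi-square via d = k*F, then pool. A sketch, with the strong caveat that you should check it against a trusted implementation (e.g. micombine.chisquare in the R package miceadds) before relying on it:

```python
# A sketch of the D2 pooling procedure (Li, Meng, Raghunathan & Rubin, 1991)
# for combining chi-square statistics across m imputations. CAVEAT: verify
# against a trusted implementation before using for real analyses.
import math

def pool_d2(chisq_stats, k):
    """Pool m chi-square statistics, each with k df; returns (D2, df1, df2)."""
    m = len(chisq_stats)
    d_bar = sum(chisq_stats) / m
    sqrt_d = [math.sqrt(d) for d in chisq_stats]
    s_bar = sum(sqrt_d) / m
    # between-imputation variability of the sqrt(chi-square) values
    r = (1 + 1/m) * sum((s - s_bar)**2 for s in sqrt_d) / (m - 1)
    d2 = (d_bar / k - (m + 1) / (m - 1) * r) / (1 + r)
    nu = k ** (-3 / m) * (m - 1) * (1 + 1 / r) ** 2  # denominator df
    return d2, k, nu  # refer d2 to an F(k, nu) distribution for the p-value

# e.g. five F-change statistics with 2 numerator df (invented numbers):
fs = [4.1, 3.8, 4.6, 3.5, 4.0]
d2, df1, df2 = pool_d2([2 * f for f in fs], k=2)
```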
You've analyzed 5 datasets, so SPSS doesn't show graphs, 'cos it
doesn't know which dataset's graph you want to see. Pick one of them.
However, heteroscedasticity is hardly ever an issue (unless your
predictors are highly skewed). If it is an issue, use a
heteroscedasticity-robust approach (in fact, just do that anyway: if
heteroscedasticity isn't an issue, it won't make any difference).
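In case it helps, here's what a heteroscedasticity-robust (HC1, "White") standard error looks like computed by hand with numpy, on simulated data where the error variance grows with the predictor. In practice you'd use a library rather than rolling your own (e.g. statsmodels lets you pass cov_type='HC3' to fit()):

```python
# Sketch: HC1 heteroscedasticity-robust standard errors by hand.
# Data are simulated; the error variance grows with |x|.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n) * (1 + np.abs(x))

X = np.column_stack([np.ones(n), x])       # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimates
resid = y - X @ beta

k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)     # X' diag(e^2) X
cov_hc1 = n / (n - k) * XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(cov_hc1))

# Classical (homoscedastic) SEs for comparison:
sigma2 = resid @ resid / (n - k)
classic_se = np.sqrt(np.diag(sigma2 * XtX_inv))
```

Under homoscedasticity the two sets of SEs agree (up to sampling noise), which is why there's little cost to using the robust version routinely.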
Also: 5 datasets isn't very many. Have you checked that's enough? In
the early days of imputation, people talked about using 5, because it
was slow and hard, but computers are better now, so you're better
doing more. (I've done up to 50 sometimes).
Also 2: The Durbin-Watson is almost never useful. It only makes sense
if the data are in some meaningful order (usually order of data
collection). If you reorder the data, the statistic will be different.
Do you really need it?
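That order-dependence is easy to see: the statistic is DW = sum((e_t - e_{t-1})^2) / sum(e_t^2), so the same residuals in a different order give a different answer. The residuals here are made up:

```python
# Sketch: the Durbin-Watson statistic computed on the same (invented)
# residuals in two different orders -- reordering changes the result.
def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(v * v for v in e)

resid = [0.5, 0.4, 0.3, -0.1, -0.4, -0.5, -0.2, 0.1, 0.3, 0.6]
dw_original = durbin_watson(resid)
dw_sorted = durbin_watson(sorted(resid))  # same values, different order
```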
On 28 February 2013 07:25, Greig Inglis <[log in to unmask]> wrote:
> Hello all,
>
>
>
> I am trying to run hierarchical multiple regressions after using multiple
> imputation on SPSS 19.
>
>
>
> Although everything is present and correct for the original data and
> imputations 1 through 5, there is a lot of information missing for the
> pooled data – including R, R Square, Adjusted R Square, all of the Change
> Statistics and Durbin-Watson. I also requested a bunch of scatterplots to
> check the assumption of homoscedasticity and the like but these also aren’t
> shown for the pooled data.
>
>
>
> It was my understanding that it was the pooled data that I should be looking
> at, so I’m confused as to why these statistics aren’t included. Any help or
> advice would be much appreciated!
>
>
>
> Cheers,
> Greig.
>
>
>
> Greig Inglis
> PhD Researcher
>
> School of Psychological Sciences & Health
> University of Strathclyde
> Graham Hills Building
> 40 George Street
> Glasgow G1 1QE
>
> Take part in psychology research at Strathclyde:
> http://www.strath.ac.uk/humanities/schoolofpsychologicalscienceshealth/psychology/takepartinresearch/
>
> Facebook: http://www.facebook.com/#!/PsychAtStrathclyde
>
> Twitter: https://twitter.com/PsychAtStrath
>
>