>
> > Dear Grant,
> >
> > >I need help in interpreting the output of an analysis with a covariate of
> > >interest. I want to determine the correlation between the difference in PET
> > >images between two conditions and the difference in a self-report covariate
> > >collected during each condition. My question is: how can the Z scores given
> > >in the output table be converted into correlation coefficients?
> > >A standard Fisher Z-to-r table yields correlation coefficients that seem
> > >overly high. For example, a Z score of 2.0 yields a correlation coefficient
> > >of 0.965. That is pretty high for a behavioral covariate, and some of the
> > >Z scores are > 4.0, which would be r > 0.999!
> > >Would it be more appropriate to take either the corrected or uncorrected
> > >(for a priori regions) p-value and then use that to find the correlation
> > >coefficient, using the number of subjects - 2 for the degrees of freedom?
> > >Under these conditions, a p-value of 0.01 (corrected) and 7 subjects would
> > >correspond to r = 0.8745, which seems more reasonable.
> >
> > The Z-scores SPM reports cannot easily be transformed into correlation
> > coefficients. It is easier to get the adjusted data and do it post hoc: go
> > to Results and plot the adjusted data. You will then find a variable y in
> > the Matlab workspace. This matrix contains the data adjusted for global
> > and subject effects. You can now compute the correlation between y and your
> > behavioural data.
> >
>
> Just a short follow-up to Christian's comment: make sure your covariates
> of interest are not correlated with the covariates of no interest before
> you apply Christian's procedure.
> JB
My two cents would be that if your covariate(s) of interest are
correlated with those of no interest (and this probably is the case
if a global signal covariate has been included), then you can
orthogonalize both the raw data and the covariate of interest with respect
to those of no interest. Alternatively, you can calculate the partial
Rmul^2 directly from the partial F-values via:
Rmul^2 = 1 - df_error_full/(F*df_error_reduced + df_error_full*(1 - F))
where F is the partial F value for the covariates of interest (could be
one or more), df_error_full is the error degrees of freedom of the
full model (i.e., with all covariates including the one of interest
as well as those of no interest), and df_error_reduced is the error
degrees of freedom of the reduced model (i.e., including all covariates
except the one(s) of interest). I trust that these F-values
are available in the SPM output.
For example (ignoring the complexity of autocorrelation), say there are
100 observations, 10 covariates of no interest, and 10 of interest. Then
df_error_reduced = 100 - 10 = 90, and
df_error_full = 100 - (10 + 10) = 80. If the F value is 1, then
Rmul^2 = 1 - 80/(1*90 + 80*0) = 1 - 80/90 = 1/9, which implies
Rmul = 1/3.
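The formula and the worked example above can be checked numerically; here is a minimal sketch (plain Python, not SPM code; the function name is mine):

```python
def partial_r2(F, df_error_full, df_error_reduced):
    """Partial (multiple) R^2 for the covariate(s) of interest, computed
    from the partial F value and the error degrees of freedom of the
    full and reduced models, using the formula given above."""
    return 1 - df_error_full / (F * df_error_reduced + df_error_full * (1 - F))

# The example above: 100 observations, 10 covariates of no interest,
# 10 covariates of interest, partial F = 1.
r2 = partial_r2(F=1, df_error_full=80, df_error_reduced=90)
print(r2)          # 1/9, i.e. about 0.111
print(r2 ** 0.5)   # Rmul = 1/3, i.e. about 0.333
```

Note that the formula is algebraically the same as F*p / (F*p + df_error_full), with p = df_error_reduced - df_error_full the number of covariates of interest.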
One should keep in mind that the distribution of this R value under
the null hypothesis depends on both df_error_full and the difference
between df_error_reduced and df_error_full (i.e., the number of
covariates of interest).
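For the orthogonalization route, one possibility (a NumPy sketch with toy data, assuming ordinary least squares and, as above, ignoring autocorrelation; all variable names here are illustrative, not SPM variables) is to project the covariates of no interest out of both the data and the covariate of interest, and then correlate the residuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: y is the (adjusted) data at one voxel, G the design of
# covariates of no interest (intercept plus, e.g., a global-signal
# covariate), x the covariate of interest.
n = 100
G = np.column_stack([np.ones(n), rng.normal(size=n)])
x = rng.normal(size=n) + 0.5 * G[:, 1]   # partly correlated with the nuisance covariate
y = 0.4 * x + rng.normal(size=n)

def residualize(v, Z):
    """Residuals of v after least-squares projection onto the columns of Z."""
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ beta

x_perp = residualize(x, G)
y_perp = residualize(y, G)

# Correlation between the orthogonalized data and covariate of interest,
# i.e. the partial correlation controlling for the covariates of no interest.
r = np.corrcoef(x_perp, y_perp)[0, 1]
print(r)
```

After residualizing, both x_perp and y_perp are orthogonal to every column of G, which is what removes the confound.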
Sincerely,
Eric
Eric Zarahn
University of Pennsylvania