Regarding this paper (Kriegeskorte et al.), I noticed that they use
ordinary least squares (OLS) for their estimates in the situation where
they introduce autocorrelation in the errors (in the supplementary
material). The claim is that such autocorrelation makes estimates of
orthogonal covariates non-independent, but I wonder what happens if
one uses the maximum likelihood scheme to implement generalized least
squares.
Furthermore, it is not self-evident to me that the specific form
of correlation structure they use in the simulation (temporal
autocorrelation) would make estimates of orthogonal covariates
correlated. In their simulation, I just see that the estimates inside
and outside the ROI become more variable, which is what you'd
expect when using OLS in such a case. But there isn't any shift of one
distribution relative to the other. Why should this be the case? The
temporal autocorrelation is uniform across all covariates in their
example.
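As a concrete way to frame the question, here is a small numpy sketch (my own toy covariates and AR(1) parameters, not the paper's simulation) comparing the covariance of OLS and GLS estimates for two orthogonal covariates under temporally autocorrelated errors:

```python
import numpy as np

# Toy comparison (my own example, not the paper's simulation): OLS vs GLS
# estimator covariance for orthogonal covariates with AR(1) errors.
rng = np.random.default_rng(0)
n, rho = 200, 0.5

# Two orthogonal covariates (orthogonalise the second against the first).
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x2 -= x1 * (x1 @ x2) / (x1 @ x1)
X = np.column_stack([x1, x2])

# AR(1) error covariance S_ij = rho^|i-j| and its precision P = inv(S).
idx = np.arange(n)
S = rho ** np.abs(idx[:, None] - idx[None, :])
P = np.linalg.inv(S)

# OLS: sandwich covariance; GLS (maximum likelihood with known S).
XtX_inv = np.linalg.inv(X.T @ X)
cov_ols = XtX_inv @ X.T @ S @ X @ XtX_inv
cov_gls = np.linalg.inv(X.T @ P @ X)

print("variances (OLS):", np.diag(cov_ols))
print("variances (GLS):", np.diag(cov_gls))
print("off-diagonal (OLS):", cov_ols[0, 1])
```

Under these assumptions GLS is the efficient estimator, so the OLS variances are inflated relative to GLS; any dependence induced between the estimates shows up in the off-diagonal of the sandwich covariance.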
Is there anyone sharing these doubts?
R. Viviani
Dept. of Psychiatry III
University of Ulm, Germany
Quoting Karl Friston <[log in to unmask]>:
> Dear Emiliano,
>
>> I'm writing to you, because I have just read this paper
>> (http://www.nature.com/neuro/journal/v12/n5/abs/nn.2303.html)
>> and I have a question for you. If I understand correctly (see also
>> supplementary materials, page 1)
>> the Authors demonstrate that when we have unequal variance in two
>> conditions (e.g. a lot of repetitions for cond A and just a few for
>> cond B),
>> "orthogonal contrasts" can produce biased results.
>> My question is:
>> If I use sphericity correction in a second-level analysis, will
>> this ensure that my SVC/ROI definition using "orthogonal contrasts"
>> will yield unbiased statistics?
>> Even more naively, does SPM do all the necessary corrections so that my
>> data/DM are essentially as if I had "equal variance" to start with?
>> (i.e. contrast-vector orthogonality is sufficient to ensure
>> independent statistics).
>> I hope this makes sense
>
>
> Yes it makes sense and no; contrast-vector orthogonality is not
> sufficient to ensure independent contrasts.
>
> However, in practice this is usually a trivial issue. The key thing to
> remember here is that a contrast is a mixture
> of parameter estimates (i.e. c'*B), where c is the contrast-matrix.
> Under the maximum likelihood scheme used
> by SPM this means the covariance of the contrasts (under the null
> hypothesis) is
>
> cov(c'*B) = c'*cov(B)*c = c'*C*c
>
> where the conditional covariance of the parameter estimates B is
>
> C = inv(X'*P*X)*X'*P*Y*Y'*P*X*inv(X'*P*X)
> = inv(X'*P*X)
>
> and P = inv(S) is the precision (inverse covariance S) of the errors
> and S = Y*Y' under the null hypothesis
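> In numpy, the collapse of the sandwich form to inv(X'*P*X) when
> P = inv(S) can be checked with a toy sketch (arbitrary X and an
> AR(1)-style S, my own example values):

```python
import numpy as np

# Toy check: inv(X'PX) * X'P S P X * inv(X'PX) = inv(X'PX) when P = inv(S).
rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))

# Any valid error covariance S (here AR(1)-like), with precision P.
idx = np.arange(n)
S = 0.4 ** np.abs(idx[:, None] - idx[None, :])
P = np.linalg.inv(S)

A = np.linalg.inv(X.T @ P @ X)
sandwich = A @ X.T @ P @ S @ P @ X @ A
print(np.allclose(sandwich, A))  # the two forms agree
```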
>
> This means that c has to comprise eigenvectors of C to ensure the
> contrasts are orthogonal. This is assured for
> all c when X is orthogonal and S has equal variances along its leading
> diagonal. However, when S contains unequal
> variances, only the contrast-vectors
>
> c' = [ 1 0 0 ...
>      [ 0 1 0 ...
>      [ 0 0 1 ...
>
> are eigenvectors. This means to use one contrast as a localizing
> contrast for another you should select first-level contrasts
> that summarize the effects you are interested in (e.g., two main
> effects) and then use c' = [1 0] to constrain the search for
> c' = [0 1] or vice versa. If you do this, then you can model unequal
> variances with impunity.
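> A toy sketch of this point (my own example numbers: a two-condition
> design with many repetitions of A, few of B, and unequal error
> variances), showing that the orthogonal contrast-vectors [1 1] and
> [1 -1] yield correlated contrasts while the unit vectors do not:

```python
import numpy as np

# Two-condition design with unequal error variances (toy values).
nA, nB = 40, 10            # many repetitions of A, few of B
sA, sB = 1.0, 4.0          # unequal error variances

X = np.zeros((nA + nB, 2))
X[:nA, 0] = 1.0            # indicator for condition A
X[nA:, 1] = 1.0            # indicator for condition B

P = np.diag(np.r_[np.full(nA, 1 / sA), np.full(nB, 1 / sB)])
C = np.linalg.inv(X.T @ P @ X)   # = diag(sA/nA, sB/nB): diagonal, unequal

c1, c2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
print("cov of [1 1] and [1 -1]:", c1 @ C @ c2)   # nonzero: correlated
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print("cov of [1 0] and [0 1]:", e1 @ C @ e2)    # zero: eigenvectors of C
```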
>
> Having said this, I would not worry. You would have to contrive
> simulations very carefully to introduce correlations
> among contrasts based on orthogonal vectors in a second-level analysis.
> This is because the implicit ANOVA will
> be orthogonal in its design (because each subject contributes the same
> number of summary statistics) and the
> non-sphericity should be mild in well-designed experiments (to the
> extent one might ask why an author needed to
> model unequal variances in the first place).
>
> Even in first-level designs the effect of serial correlations will be
> small because the regressors we use are generally
> smoother than the serial correlations (this means X'*P*X is roughly
> orthogonal, provided X is).
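> This can be illustrated with a toy sketch (my own example values:
> smooth, orthogonal sine/cosine regressors over an integer number of
> cycles, and an AR(1) error covariance), computing the correlation the
> serial correlations induce between the two parameter estimates:

```python
import numpy as np

# Smooth orthogonal regressors: low-frequency sine/cosine (toy choice).
n, rho, k = 200, 0.4, 3
t = np.arange(n)
X = np.column_stack([np.sin(2 * np.pi * k * t / n),
                     np.cos(2 * np.pi * k * t / n)])

idx = np.arange(n)
S = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) error covariance
P = np.linalg.inv(S)

C = np.linalg.inv(X.T @ P @ X)                   # conditional covariance
corr = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
print("induced correlation between estimates:", corr)
```

Because the regressors vary slowly relative to the serial correlations, X'*P*X stays close to diagonal and the induced correlation is small.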
>
> If you are worried that non-orthogonal contrasts can induce biased
> selection, you can always orthogonalise your
> contrasts using the conjunction facility (i.e. select multiple
> contrasts while holding down the control key). SPM will
> then create contrasts that are serially orthogonalised and can be used
> as localizing contrasts (or summary statistics)
> that are exactly orthogonal.
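> As a toy sketch of what serial orthogonalisation does (my own example
> C, mimicking the idea rather than SPM's actual code): each later
> contrast-vector has removed from it the component that covaries with
> the earlier ones under the metric C.

```python
import numpy as np

# Toy parameter covariance with unequal variances (e.g. sA/nA, sB/nB).
C = np.diag([0.025, 0.4])
c1 = np.array([1.0, 1.0])
c2 = np.array([1.0, -1.0])

# Remove from c2 its component that covaries with c1 under the metric C.
c2_orth = c2 - c1 * (c1 @ C @ c2) / (c1 @ C @ c1)
print("cov before:", c1 @ C @ c2)        # nonzero
print("cov after :", c1 @ C @ c2_orth)   # ~0: exactly orthogonal contrasts
```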
>
> I hope this helps - Karl
>
> (NB: The argument in Kriegeskorte et al (2009) pertains only to the use
> of localizing T-contrasts to restrict search
> volumes; it has no implications for F-tests).