Pierre -
> We are analysing an event-related fMRI experiment. We have 4 conditions
> (4 types of events), the onsets for each event are random, and the whole
> brain is scanned within 2 seconds. We modelled the response with the HRF
> and its temporal and amplitude derivatives. We looked at the collinearity
> between the different effects modelled in the design matrix, and we have
> a lot of questions.
>
> 1) In the design matrix, the 3 functions (HRF, temporal and amplitude
> derivatives) for one type of event are not orthogonal (the covariance is
> slightly different from 0). I presume that this slight collinearity is
> introduced during the convolution (of the basis functions with the stick
> function of the SOA) and the resampling (to allocate a value for each
> scan). Is it a problem for the analysis?
The basis functions are explicitly orthogonalised in a high-resolution time space, but correlation may be introduced by the downsampling every TR to create the covariates, which is what I assume you are seeing.

I don't know whether you regard it as a "problem" - presumably the correlation is quite small - but you could explicitly orthogonalise the covariates for the partial derivatives with respect to the canonical HRF, for example ("justifiably" forcing the shared variance to be assigned to the HRF, which is normally your main interest).
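For illustration, a minimal numpy sketch of that orthogonalisation (the data here are made up, not SPM regressors): the derivative covariate is replaced by its residual after regressing it on the canonical-HRF covariate, so any shared variance stays with the HRF.

```python
import numpy as np

def orthogonalise(x, y):
    """Remove from x its projection onto y, so the returned vector is
    orthogonal to y (shared variance is assigned to y)."""
    y = y.reshape(-1, 1)
    beta = np.linalg.lstsq(y, x, rcond=None)[0]
    return x - y @ beta

# Toy covariates (in practice: the downsampled, convolved regressors).
rng = np.random.default_rng(0)
hrf = rng.standard_normal(200)
deriv = 0.3 * hrf + rng.standard_normal(200)  # slightly correlated with hrf

deriv_orth = orthogonalise(deriv, hrf)
# np.dot(deriv_orth, hrf) is now zero up to floating-point error
```

The same residual-forming trick generalises to orthogonalising against several columns at once by passing a matrix for y.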
> 2) The covariates modelling hrf1, hrf2, hrf3, hrf4 (the HRF for each
> type of event) are not orthogonal. I think this is not surprising
> because the onsets for each type of event are randomly chosen, so some
> correlation can occur. I think that could be a real problem because
> some effects will not be detected with the linear model. It is possible
> to orthogonalise hrf2 versus hrf1, then hrf3 versus [hrf1 hrf2ortho],
> and so on. However, I presume that the hrf2 derivatives cannot then be
> used with hrf2 orthogonalised with respect to hrf1. What can we do with
> the derivatives?
This is not surprising - even with random onsets, convolution can introduce considerable correlation between covariates for different event-types. Again, the extent to which it is a "problem" depends on the degree of correlation. Any results you do find are valid - in the sense that they show the orthogonal contributions of each covariate. However, you will not have much power to detect such contributions if the correlation is high (ie the orthogonal part is small). The only way round this is to re-design your experiment (ie onsets) to minimise correlation when convolved with the HRF.

You could "prioritise" one event-type with respect to the others, orthogonalising the covariates as you suggest, but this is just one of several possible orthogonalisation schemes, which you need to justify (ie there is no "absolute way" to attribute shared variance).
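A small sketch of both points, with made-up onsets and a crude gamma-shaped kernel standing in for the real HRF: convolving two independent random stick functions typically leaves the resulting regressors correlated, and a serial (Gram-Schmidt style) orthogonalisation then assigns the shared variance to whichever regressor you prioritise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, hrf_len = 200, 16

# Crude HRF-like kernel (gamma-shaped); purely illustrative.
t = np.arange(hrf_len)
kernel = t**5 * np.exp(-t)
kernel /= kernel.max()

def regressor(p_event):
    """Convolve a random stick function (event onsets) with the kernel."""
    sticks = (rng.random(n_scans) < p_event).astype(float)
    return np.convolve(sticks, kernel)[:n_scans]

r1 = regressor(0.15)
r2 = regressor(0.15)
# Independent onsets, yet the convolved regressors share variance:
corr = np.corrcoef(r1, r2)[0, 1]

# Serial orthogonalisation, prioritising r1: shared variance goes to r1.
beta = np.dot(r1, r2) / np.dot(r1, r1)
r2_orth = r2 - beta * r1
```

Which regressor to prioritise is exactly the modelling choice referred to above - the arithmetic gives no "absolute way" to split the shared part.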
> 3) We also use filtering with cosine functions. Again, these variables
> correlate with some columns of the matrix of interest. Is it a problem
> if these nuisance covariates are orthogonalised with respect to the
> columns of interest?
If the covariates really are "nuisance" variables, I should think you would NOT want to orthogonalise them with respect to your covariates of interest - because you are normally interested in the effects of interest that cannot be explained by correlated nuisance factors.
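To see why, here is a toy simulation (hypothetical data, not a filtering example): when a nuisance regressor is orthogonalised with respect to the regressor of interest, the shared variance is handed to the regressor of interest, and its estimate is inflated relative to the full model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
interest = rng.standard_normal(n)
drift = 0.5 * interest + rng.standard_normal(n)   # correlated nuisance
y = 1.0 * interest + 2.0 * drift + rng.standard_normal(n)

def fit(X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Full model: the interest estimate reflects only the variance not
# explained by the nuisance term (near the true value of 1.0).
b_full = fit(np.column_stack([interest, drift]))

# Orthogonalising the nuisance w.r.t. interest assigns the shared
# variance to the interest regressor, biasing its estimate upward.
drift_orth = drift - interest * (interest @ drift) / (interest @ interest)
b_orth = fit(np.column_stack([interest, drift_orth]))
```

Both models fit the data equally well (they span the same space); only the attribution of the shared variance changes, which is why orthogonalising nuisance covariates this way is usually the wrong choice.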
Rik
--
---------------------------8-{)}-------------------------
DR R HENSON
Institute of Cognitive Neuroscience &
Wellcome Department of Cognitive Neurology
17 Queen Square
London, WC1N 3AR
England
EMAIL: [log in to unmask]
URL: http://www.fil.ion.ucl.ac.uk/~rhenson
TEL1 +44 (0)20 7679 1131
TEL2 +44 (0)20 7833 7472
FAX +44 (0)20 7813 1420
---------------------------------------------------------