Dear Pierre,
> I have some questions concerning the analysis of fMRI time series.
>
> 1) In SPM96, low-frequency variations are removed by entering
> covariates of no interest (a set of cosine functions) into the design
> matrix. In SPM99, this is done when filtering the data (in
> spm_filter.m and spm_make_filters.m). Is this only for programming
> reasons, or is there a benefit for the analysis?
They are mathematically equivalent. The high-pass component of the
SPM99 filter is simply the residual-forming matrix of the SPM96 drift
terms. The advantage is that the design matrix in SPM99 is easier to
visualize, and there are fewer parameter estimates to store.
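To illustrate the equivalence, here is a minimal NumPy sketch (with an illustrative DCT basis and toy design, not SPM's actual code): the residuals from an SPM96-style design that includes the drift terms as confounds equal the residuals obtained by first filtering with the residual-forming matrix of those drift terms and then fitting the drift-free design.

```python
import numpy as np

n, k = 64, 4                               # scans and drift terms (illustrative)
t = np.arange(n)
# Hypothetical DCT drift basis: low-frequency cosine regressors
Xc = np.column_stack([np.cos(np.pi * (t + 0.5) * j / n) for j in range(1, k + 1)])

# SPM99-style high-pass filter: the residual-forming matrix of the drift terms
S = np.eye(n) - Xc @ np.linalg.pinv(Xc)

# A toy effect of interest, plus the drifts SPM96-style in one design matrix
Xe = np.column_stack([np.ones(n), np.sin(2 * np.pi * t / 8)])
X96 = np.column_stack([Xe, Xc])

y = np.random.default_rng(0).standard_normal(n)   # toy data

# Residuals from the SPM96 design ...
r96 = y - X96 @ np.linalg.pinv(X96) @ y
# ... match residuals from filtering first, then fitting the drift-free design
Sy, SXe = S @ y, S @ Xe
r99 = Sy - SXe @ np.linalg.pinv(SXe) @ Sy

print(np.allclose(r96, r99))               # True: mathematically equivalent
```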
> Moreover, I think that the
> design matrix partition Xc (n rows = number of scans, p columns = number
> of cosine functions modelled) must be such that Xc'*Xc is the identity
> matrix. Is that right?
It is - because the columns of Xc comprise orthonormal regressors (a DCT
set).
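This is quick to check numerically (a sketch with a unit-norm DCT-II basis and illustrative sizes, not SPM's code):

```python
import numpy as np

n, p = 64, 8                               # scans, cosine regressors (illustrative)
t = np.arange(n)
# Unit-norm DCT set: cos(pi*(t + 1/2)*j/n), scaled by sqrt(2/n)
Xc = np.column_stack([np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * j / n)
                      for j in range(1, p + 1)])
print(np.allclose(Xc.T @ Xc, np.eye(p)))   # True: Xc'*Xc is the identity
```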
> 2) Concerning the treatment of autocorrelation by smoothing the time
> series, I do not clearly understand the benefit of such a treatment.
> If there is no temporal correlation in the time series, we introduce
> one. If there is a correlation (or, more generally, if the covariance
> matrix of the errors is not the identity matrix), smoothing will not
> remove the initial correlation (and if, possibly, the covariance matrix
> of the errors is not symmetrical, I think that smoothing with a
> Gaussian filter will have a damaging effect).
Smoothing makes all the temporal correlations the same (or similar).
Some voxels may have high temporal correlations, some will have low.
After low-pass filtering (smoothing) all the voxels will have about the
same correlation structure, and we can use the same design matrix and
correlation assumptions for all voxels. The parameters are estimated
with slightly less efficiency, but the standard error is much more
robust.
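A toy NumPy sketch of this point (illustrative AR(1) covariances and a hand-rolled Gaussian convolution matrix, not SPM's implementation): two voxels with very different intrinsic autocorrelations end up with nearly the same correlation structure after low-pass filtering, because the filtered covariance K*V*K' is dominated by the kernel.

```python
import numpy as np

n = 100
m = n // 2                                  # inspect a lag-1 pair mid-series

def ar1_cov(rho, n):
    # AR(1) error covariance: V[i, j] = rho**|i - j|
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def gaussian_smoother(fwhm, n):
    # Convolution matrix K for a Gaussian kernel (FWHM in scans); rows sum to 1
    sigma = fwhm / np.sqrt(8 * np.log(2))
    idx = np.arange(n)
    K = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def lag1_corr(V):
    return V[m, m + 1] / np.sqrt(V[m, m] * V[m + 1, m + 1])

K = gaussian_smoother(fwhm=4.0, n=n)
V_lo, V_hi = ar1_cov(0.1, n), ar1_cov(0.6, n)   # two very different voxels

before = lag1_corr(V_lo), lag1_corr(V_hi)
after = lag1_corr(K @ V_lo @ K.T), lag1_corr(K @ V_hi @ K.T)
print(before)   # heterogeneous across voxels
print(after)    # nearly equal: the imposed smoothness dominates
```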
> 3) In SPM99, one option is AR(1) to remove the autocorrelation. I
> looked at the SPM code and, if I have not misunderstood, the
> autocorrelation is calculated using the original data values. However,
> I believe that the linear model requires that the errors are
> independent, not that the values are independent. For example, suppose
> we have a sequence of scans: condition A, condition B, A, B, A, B, ...
> (and let rCBF increase during condition B relative to condition A).
> There will be a correlation between scans but, for me, this does not
> exclude the use of the linear model. I think it would perhaps be better
> to apply the linear model and determine the autocorrelation of the
> errors (residuals). If an important correlation is found, the data can
> be re-analysed using the corresponding covariance matrix as the
> covariance matrix of the errors. I believe that the new errors will not
> be correlated.
In fact SPM99 uses the raw data because the correlations among the
residuals (cov(r) = r*r') are NOT a good estimate of the error
correlations (cov(e) = e*e' = V). This is easily seen mathematically:

r = (I - X*pinv(X))*y = R*y
cov(r) = R*V*R'

This is not cov(e) = V, because correlations have been induced by the
estimation procedure.
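A small NumPy check of this (toy design matrix, not SPM's code): even when the true errors are i.i.d. (V = I), the residual covariance R*V*R' has non-zero off-diagonal terms induced by the fit.

```python
import numpy as np

n = 32
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])  # toy design
R = np.eye(n) - X @ np.linalg.pinv(X)       # residual-forming matrix

V = np.eye(n)                               # suppose the true errors are i.i.d.
cov_r = R @ V @ R.T                         # covariance of the residuals r = R*y

# Even with independent errors, the residuals are correlated:
off_diag = cov_r - np.diag(np.diag(cov_r))
print(np.allclose(cov_r, V))                # False
print(np.abs(off_diag).max())               # clearly non-zero
```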
As noted in previous emails, the next version of SPM will estimate the
correlations using an EM algorithm that properly accounts for real
activations in the data.
I hope this helps - Karl