Dear Patrick,
> First, I know temporal smoothing (TS) has been applied before any other
> analysis is performed. Is this TS a function of the haemodynamic
> delay or not? (And if not, where does the haemodynamic delay come into
> the picture?) I found some formulas about it in the SPM book on pages
> 86-88, but I'm not quite sure about them. And which other smoothings are
> related to TS, and how (spatial smoothing, ...)? What exactly are the
> consequences of these smoothings on the degrees of freedom (something
> done automatically by SPM, but I'm eager to know how it happens
> exactly)? I know it depends on the number of scans taken from a subject
> (supposing a single-subject study, one run for clarity), but does it
> vary in a study dealing with two conditions, or when e.g. 4
> conditions are studied? And if multiple runs are used, is it
> simply a multiple of that or not?
Firstly, temporal filtering (i.e. smoothing or low-pass filtering and
high-pass filtering or 'drift removal') is distinct from convolving the
stimulus function (e.g. stick or box-car function) with a hemodynamic
response function (HRF) or basis set modeling voxel-specific HRFs. The
delay is embodied in the latter, not the former. The effective degrees
of freedom are simply a function of the serial correlations in the
time-series and can be thought of as the number of temporal resolution
elements (RESELs) in a way analogous to spatial smoothing (i.e. the
effective d.f. is roughly the length of the time-series divided by the
FWHM of the temporal smoothness). It therefore depends on both the
number of scans and the serial correlations. Smoothing is a device that
regularises the correlation structure so that its estimation is more
robust.
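As a back-of-the-envelope illustration of that rule of thumb (a sketch with made-up numbers, not SPM code; the TR and FWHM values are assumptions):

```python
# Toy illustration (made-up numbers, not SPM code): effective d.f. as
# "temporal RESELs" -- roughly the series length divided by the FWHM
# of the temporal smoothness, expressed in scans.
n_scans = 120        # length of the time-series
tr = 2.0             # repetition time in seconds (assumed)
fwhm_seconds = 8.0   # assumed FWHM of the temporal smoothing kernel

fwhm_scans = fwhm_seconds / tr
eff_df = n_scans / fwhm_scans
print(eff_df)  # 30.0
```

Heavier smoothing (larger FWHM) therefore costs effective degrees of freedom, even though the number of scans is unchanged.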
> Then I have some questions about the t-tests used in SPM.
> When e.g. a contrast C, namely A - B, is specified, your t-value for
> every voxel v will become:
>
> t(v, C) = (Beta(v,A) - Beta(v,B))/Error(v, C)
>
> with Beta(v,i) the regression coefficient for condition i in the GLM and
> Error(v, C) the error between the modelled signal and the original
> signal (for clarity, zero mean supposed), given the contrast C. Is this
> error a squared error, i.e. the square of the difference
> between the signal activity and the modelled activity, summed over all
> scans for that voxel? Or is it a root squared error (thus the root of
> the previous formula)? The formula used in the SPM book is not quite
> clear about that. I suppose the last option is the correct one, but I'm
> not quite sure.
The denominator is the standard error of the contrast (i.e. the square
root of the estimated error variance over scans). This is a function
of the serial correlations (V), sum of squares of the residuals (r'r)
and the effective degrees of freedom trace(RV).
t = C'*Beta/SE,  SE^2 = (r'r)/trace(RV) * C'*pinv(X)*V*pinv(X)'*C    (1)
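A minimal numerical sketch of that expression (my own illustration with numpy, not SPM's actual code; the design matrix and noise are made up, and V is taken as the identity for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-voxel GLM (illustration only, not SPM's implementation):
# y = X*beta + noise, with V the assumed serial-correlation matrix.
n = 120
X = np.column_stack([rng.standard_normal((n, 2)), np.ones(n)])  # A, B + mean
y = X @ np.array([1.0, 0.5, 10.0]) + rng.standard_normal(n)

V = np.eye(n)                   # identity = no serial correlation, for simplicity
pX = np.linalg.pinv(X)          # pseudoinverse of the design matrix
beta = pX @ y                   # parameter estimates
R = np.eye(n) - X @ pX          # residual-forming matrix
r = R @ y                       # residuals
edf = np.trace(R @ V)           # effective degrees of freedom, trace(RV)

C = np.array([1.0, -1.0, 0.0])  # contrast A - B
SE2 = (r @ r) / edf * (C @ pX @ V @ pX.T @ C)
t = (C @ beta) / np.sqrt(SE2)
print(edf, t)                   # with V = I, edf is n minus 3 parameters
```

So the denominator is indeed a square root: the square root of the estimated contrast variance, not the summed squared error itself.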
> And what about it if you have three conditions in your contrast C,
> let's say A + B - 2D. Is the t-test in that case equal to
>
> t(v, C) = (Beta(v,A) + Beta(v,B) - 2Beta(v,D))/Error(v, C)
The standard error is itself a function of the contrast; here
C = [1 1 -2] in (1) above.
> Another question, again related to the degrees of freedom: how are
> they calculated exactly for their use in the transition from t-tests to
> Z-scores? Normally a model (with two conditions) loses 3 degrees of
> freedom by means of its mean and the two regression coefficients.
> So with 120 scans you get a d.f. of 117, i.e. sqrt(117) must be
> used as a factor, or are there also other Bonferroni-like influences
> (the smoothing parameter s e.g., although I've no idea if it's related
> to the temporal smoothing, the spatial smoothing or some other kind of
> smoothing)? And does it matter if you used 2, 3 or more conditions in
> your study to determine the d.f.? I suppose so, since you've fewer
> scans per condition when you have more conditions and vice versa, but I
> found no evidence for it in the literature.
The d.f. does indeed decrease with the number of parameters you have to
estimate. I would remember that the statistics in SPM are no different
from conventional parametric statistics using the general linear
model. The only difference is that when it comes to making corrections
to the p values Gaussian field theory is employed (this is after
parameter estimation and generation of the t statistic).
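To make the dependence on the number of conditions concrete (a toy check of my own, not SPM output; the designs below are hypothetical), the classical residual d.f. is the number of scans minus the number of independent columns in the design matrix, so each extra condition costs one d.f.:

```python
import numpy as np

rng = np.random.default_rng(1)

# Residual d.f. = scans minus rank of the design matrix (toy designs).
n = 120  # scans, as in the example above
dfs = []
for n_conditions in (2, 3, 4):
    # hypothetical design: one regressor per condition plus a mean column
    X = np.column_stack([rng.standard_normal((n, n_conditions)), np.ones(n)])
    df = n - np.linalg.matrix_rank(X)
    dfs.append(df)
    print(n_conditions, df)  # 2 -> 117, 3 -> 116, 4 -> 115
```

With serial correlations present, SPM replaces this classical count with the effective d.f., trace(RV), as in (1) above.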
I hope this helps - Karl