Although I agree in general with your response, I think that you
overstate the benefits of temporal basis functions and the dangers
of selective averaging.
First, the dangers of selective averaging. The problem here is one
of sampling. If the data are collected at fixed 1-second intervals,
the sampling is almost certainly adequate for the averaged response
to accurately represent the hemodynamic response. If the TR is five
seconds, the sampling is sure to be inadequate. If one uses the same
definition of the Nyquist criterion as is used in image processing,
one concludes that two samples per FWHM are needed. The FWHM of the
peak of the hemodynamic response is 5-6 seconds, so this says that
the maximum TR would be 2.5 to 3 seconds. That doesn't mean a TR of
2.5 seconds is without problems - the phone company oversamples by
50% to get good voice quality - but I wouldn't expect major errors
at TRs of less than 2.5 seconds.
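As a rough numerical check on this argument, here is a small Python sketch. It uses an illustrative double-gamma hemodynamic response (the shape parameters are my assumptions, not taken from any particular analysis package), estimates the FWHM of the positive peak numerically, and derives the maximum TR from the two-samples-per-FWHM criterion:

```python
import math

def hrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, c=1/6.0):
    """Illustrative double-gamma hemodynamic response.

    Parameters are assumptions chosen to peak near 5 s with a small
    late undershoot; they are not the exact values of any package.
    """
    def g(t, a, b):
        # Gamma probability density with shape a and scale b.
        return (t ** (a - 1) * math.exp(-t / b)) / (b ** a * math.gamma(a))
    return g(t, a1, b1) - c * g(t, a2, b2)

# Sample the response finely and find the full width at half maximum
# of the positive peak.
ts = [i * 0.01 for i in range(1, 3001)]          # 0.01 .. 30.00 s
ys = [hrf(t) for t in ts]
peak = max(ys)
above = [t for t, y in zip(ts, ys) if y >= peak / 2.0]
fwhm = above[-1] - above[0]

# Two samples per FWHM gives the maximum TR under this criterion.
max_tr = fwhm / 2.0
print(f"FWHM ~ {fwhm:.1f} s, max TR ~ {max_tr:.1f} s")
```

With these assumed parameters the estimated FWHM falls in the 5-6 second range quoted above, giving a maximum TR between 2.5 and 3 seconds.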
Temporal basis functions (by which I mean bases other than a sum
of delta functions) completely overcome these sampling issues if the
stimulus presentation intervals are randomly varied. However, this
comes at the cost of a very strong assumption - that every hemodynamic
response will be nearly the same and nearly equal to the impulse
response. This assumes that neuronal firing is either very brief, or
that it is known and can therefore be modeled by modifying the
temporal basis functions. It is not hard to think of complex
behavioral paradigms where the experimenter cannot predict precisely
when, or for how long, neurons will fire. If the response is poorly
modeled by the temporal basis functions (and many responses will be,
because most of the bases proposed do not span the space of possible
functions), it is likely to be missed entirely. Selective averaging
combined with an F-test requires no assumptions about the response
timing or shape.
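To make that last point concrete, here is a sketch (on synthetic data; every name, onset, and parameter is illustrative, not from any package) of selective averaging cast as FIR estimation in a general linear model, followed by an omnibus F-test that asks only "is there any response at all?" without assuming its shape:

```python
import numpy as np

rng = np.random.default_rng(0)

tr = 1.0                  # sampling interval (s), adequate per the Nyquist argument
n_scans = 200
n_lags = 12               # FIR window: 12 s of peristimulus time
onsets = np.arange(10, 190, 20)   # event onsets (scans), non-overlapping here

# Design matrix: one delta-function regressor per peristimulus lag,
# so each coefficient is the selective average at that lag.
X = np.zeros((n_scans, n_lags))
for lag in range(n_lags):
    idx = onsets + lag
    X[idx[idx < n_scans], lag] = 1.0

# Simulate data with a response of arbitrary shape plus noise; the
# analysis below never uses knowledge of this shape.
true_resp = np.exp(-0.5 * ((np.arange(n_lags) - 5) / 2.0) ** 2)
y = X @ true_resp + rng.normal(0.0, 0.5, n_scans)

# Least-squares fit of the FIR model (plus an intercept).
Xf = np.column_stack([X, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(Xf, y, rcond=None)
resid = y - Xf @ beta

# Omnibus F-test: full FIR model vs. mean-only null model.
rss_full = resid @ resid
y0 = y - y.mean()
rss_null = y0 @ y0
df1 = n_lags
df2 = n_scans - n_lags - 1
F = ((rss_null - rss_full) / df1) / (rss_full / df2)
print(f"F({df1},{df2}) = {F:.1f}")
```

Because the events here do not overlap, the FIR coefficients are exactly the selective averages at each lag; with overlapping trials the least-squares fit supplies the overlap correction.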
My conclusion is that one must take an agnostic approach to this
problem. In experiments where temporal sampling is poor, temporal
basis functions must be used. In studies where the neuronal response
may be unpredictable, selective averaging should be used. In most
experiments, the tradeoffs are hard to quantify because we don't have
enough data yet.
--------------------------------------------------------------
John Ollinger
Washington University
Neuro-imaging Laboratory
Campus Box 8225
St. Louis, MO 63110
http://imaging.wustl.edu/Ollinger
> There are two components to this question. (i) have we tried
> rapid-presentation event-related fmri? (i.e. an experimental design
> issue) and (ii) have we implemented selective averaging using our
> event-related approach (an analysis issue)?
>
> (i) Yes we use rapid-presentation and stochastic designs and consider
> them to be very useful and efficient. There are a number of people
> working on the efficiency of stochastic designs with small SOAs
> including Eric Zarahn and Anders Dale. The conclusion is that smaller
> SOAs lead to more efficient designs. We usually adopt a lower limit of
> about 1 second (to avoid nonlinear saturation effects). Anders and his
> colleagues have shown that a 500ms SOA is viable in visual studies.
>
>
> (ii) We do not use 'selective averaging' and will not. The reason is as
> follows:
>
> Our general approach is to use temporal basis functions, that are
> convolved with a stimulus function to give explanatory variables in the
> design matrix. The stimulus functions can be a collection of 'stick'
> functions (event-related) or box cars (epoch-related). Temporal basis
> functions are central in that they allow for a graceful transition from
> FIR models to fixed-form response estimates. They avoid the problems
> of biased sampling associated with FIR motivated analyses (see below),
> yet retain their flexibility in modeling voxel-specific response forms.
>
> Selective averaging is the same as using the general linear model to
> estimate the finite impulse response (FIR) associated with each trial
> type. This is in turn equivalent to using temporal basis functions
> that comprise a series of delta functions at each TR following stimulus
> onset. The fundamental problem with this approach is that the data
> have to be acquired at these discrete time points, engendering a biased
> sampling of the peristimulus interval. Not only is there a biased
> sampling but the nature of this bias changes from slice to slice. The
> importance of temporal basis functions is that one can sample the
> interstimulus interval in a uniform and unbiased way with minimal loss
> of flexibility (by desynchronizing stimulus presentation and data
> acquisition).
>
> Clearly this argument becomes more potent at long TRs. Much of the
> published work using selective averaging has used short TRs to look at
> small brain volumes and should not be criticised along these lines.
>
> In short it is easy to do selective averaging in SPM but we would never
> design an experiment where it could be used, so we have little
> experience with it.
>
> Very best wishes - Karl
>