Markus -

First, you might want to check this archived message:

where Donald McLaren identified a problem with using the precise 
sigmoidal equation in Henson et al (2002) for data analysed in SPM5+, 
because the scaling in spm_get_bf.m changed (that paper was based on SPM2).

To answer your specific questions:

> I am referring to the paper of Henson et al 2002 NeuroImage 15, 83-95,
> about the Latency differences.
> I would like to use this technique.
> Am I right when I proceed as follows?
> a) integrate temporal derivatives into my first level model
Yes. Note however that you can only separate estimation of latency from 
estimation of height of an HRF if you have long or jittered SOAs (eg 
null events); with rapid, fixed-SOA event-related designs (eg SOA<~2s, 
randomised event-types), the temporal derivative for one event-type will 
be correlated with the difference in (ie contrast of) canonical HRFs 
across event-types. This is just an example of the more general point 
that you need to estimate each regressor (temporal basis function) with 
high statistical efficiency - ie the distinction between estimating an 
HRF *shape* and simply detecting the *amplitude* of an assumed shape 
(e.g, relative efficiencies of a canonical HRF vs an FIR basis set; see 
Henson, 2004, HBF book chapter).

> b) calculate the beta for the hrf_<each condition> as a [1 0]
> c) calculate the beta for the td_<each condition> as a [0 1]
> d) use the image calculator to create the latency_image_<each condition>
> using the formula
> 2C/(1+exp(D beta2/beta1)) - C;
> where C=1.78, D=3.1 (Henson et al 2002, Neuroimage 15, p86).
Yes to points b-d, except that you might need to re-estimate the 
parameters C and D if you are using SPM5+, as Donald found.
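Points b-d amount to the following voxelwise computation. This is a 
minimal Python sketch for a single voxel, not SPM/ImCalc code; 
`beta_hrf` and `beta_td` are hypothetical names for the canonical-HRF 
and temporal-derivative parameter estimates, and C and D are the 
SPM2-era values from the paper:

```python
import math

# Sigmoid latency transform of Henson et al. (2002), NeuroImage 15, p86.
# C = 1.78, D = 3.1 were derived for SPM2's basis-function scaling and
# may need re-estimating for SPM5+ (see Donald's archived message).
C = 1.78
D = 3.1

def latency_shift(beta_hrf, beta_td):
    """Map the derivative:canonical ratio to an approximate latency
    shift (in seconds) via 2C/(1 + exp(D*beta_td/beta_hrf)) - C."""
    ratio = beta_td / beta_hrf
    return 2 * C / (1 + math.exp(D * ratio)) - C

# A zero derivative loading implies no shift from the canonical latency:
print(latency_shift(1.0, 0.0))  # 0.0
```

In ImCalc terms, i1 plays the role of beta_hrf and i2 of beta_td (see 
your point f below).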

Note that these parameters only matter if you want to estimate the 
precise latency (eg in seconds), which is only really valid in the 
linear regime where the Taylor approximation holds (ie +/-1s of the 
canonical latency). Furthermore, precise latency differences in the BOLD 
impulse response may not be easily interpretable, because they do not 
necessarily reflect latency differences in the underlying neural 
activity (which is what I assume you are really interested in) - given 
the time integration (see Discussion in Henson et al, 2002) and that the 
neural-BOLD coupling is likely to have appreciable nonlinearities. (This 
is perhaps one reason that the various published methods for estimating 
BOLD latencies have not been used extensively for neuroscientific 
inference.)
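The +/-1s linear-regime point can be checked numerically: the 
first-order Taylor model hrf(t - dt) ~ hrf(t) - dt*hrf'(t) is only 
accurate for small shifts dt. A toy Python sketch, using a hypothetical 
gamma-shaped response rather than SPM's canonical HRF:

```python
import math

def hrf(t):
    # Toy HRF: a single gamma density peaking near 5 s (NOT SPM's
    # canonical HRF; purely illustrative)
    return (t ** 5) * math.exp(-t) / 120.0 if t > 0 else 0.0

def deriv(t, h=1e-3):
    # Numerical temporal derivative (central difference)
    return (hrf(t + h) - hrf(t - h)) / (2 * h)

def approx_error(dt):
    # Peak error of the first-order Taylor model over a 30 s window:
    # hrf(t - dt) ~ hrf(t) - dt * hrf'(t)
    ts = [i * 0.1 for i in range(1, 300)]
    return max(abs(hrf(t - dt) - (hrf(t) - dt * deriv(t))) for t in ts)

print(approx_error(0.5))  # small: well inside the linear regime
print(approx_error(3.0))  # much larger: the approximation breaks down
```

Since the Taylor remainder grows roughly with dt^2, shifts much beyond 
a second are poorly captured by the derivative regressor.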

If you don't care about precise latency, then you can view the sigmoidal 
function just as a statistical transform that prevents the 
derivative:canonical ratio from exploding beyond the linear regime (or 
when the canonical estimate is close to zero - ie for voxels where there 
is no basic impulse response in the first place). Then the precise 
parameters don't matter: you are just conditioning the data so that it 
becomes more Gaussian. (The ratio won't be precisely Gaussian even 
after transformation - you could use a log transform for that - though 
with enough Gaussian smoothing, the parametric stats should be 
reasonably robust.) It also helps to analyse only voxels where there is a
significant loading on the canonical HRF as well (ie use an inclusive 
mask, as in Henson et al, 2002), where the ratio only really makes sense 
(as mentioned above).

> e) enter these latency_images into the second level stats (ANOVA).

> f) And the last question: Is the above formula correctly entered into
> the ImageCalculator when doing:
> f =  '(2*1.78./(1+exp(3.1*i2./i1))) - 1.78' ( I mean, I do get some
> images, but are they correct??)
Should be. I can send you a function that writes latency images offline 
if you want. But only if you are sure you want to proceed with latency 
analyses.... ;-)



                 Dr Richard Henson
         MRC Cognition & Brain Sciences Unit
                 15 Chaucer Road
                  CB2 7EF, UK

           Office: +44 (0)1223 355 294 x522
              Mob: +44 (0)794 1377 345
              Fax: +44 (0)1223 359 062