Hi all,

I have quite a few questions regarding slice timing and stimulus paradigm
file timing in sparse sampling (ie clustered volume acquisition). I was able
to figure out the basics from the FSL archives, but I did not fully
understand the details. I am new to FEAT and thus (?) some of the questions
are rather simple and/or their logic may be flawed, but I'd rather ask a
stupid question than reach a wrong conclusion by myself. Sorry for any
inconvenience this might cause :) I hope the answers will benefit all of us
who use sparse sampling designs.



*** Background ***

In an fMRI auditory experiment, to avoid the confounding effect of the
acoustic scanner noise, we collect just one full EPI volume (24 slices)
every 10 seconds. It takes 1.2 seconds to collect all the 24 slices, which
is then followed by a pause of 8.8 seconds during which no EPI data is
collected. A single auditory stimulus (duration 0.3 seconds) is presented 4
seconds before each EPI onset. Thus, there is just 1 stimulus for each EPI.
The order of collecting the EPI slices is from bottom of the stack (basal)
towards the top of the head (1-24). We use a 3T Siemens Trio with a birdcage
head coil.

For describing the stimulus timings, we use the 3-column paradigm file
format for "Basic shape". For several reasons we would rather stick to the
3-column format rather than the 1-column format. We have 9 different
stimulus categories (plus REST) presented in a random order, thus we have
the corresponding 9 paradigm files and define REST implicitly (ie REST
epochs are where none of the 9 paradigm files suggests there would have been
a stimulus). Thus, the sequence of events (in real time) is something like this:

EPI0 (0 sec) --- Stim1 (6 sec) --- EPI1 (10 sec) --- Stim2 (16 sec) --- EPI2
(20 sec) ...
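For concreteness, the timeline above can be generated with a few lines of
Python (the values are taken from the description; the script itself is just
an illustrative sketch):

```python
# Illustrative sketch of the timeline above: one EPI volume every 10 s,
# with each auditory stimulus presented 4 s before the next EPI onset.
TR = 10.0        # seconds between EPI volume onsets
STIM_LEAD = 4.0  # stimulus precedes the following EPI onset by 4 s

for k in range(3):
    print(f"EPI{k} at {k * TR:.0f} sec")
    print(f"Stim{k + 1} at {(k + 1) * TR - STIM_LEAD:.0f} sec")
```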

Despite the short stimulus duration, the above paradigm seems to work
extremely well - the data we have collected so far show very strong
activations in auditory areas. Thus all of the questions below mainly serve
to tune the analysis correctly to get the best out of the data, and to
reduce artifacts that arise from suboptimal analysis settings. Hopefully a
better understanding of what happens at TR = ISI = 10 s will also allow us
to improve our paradigms towards a more rapid event-related design.

**********
QUESTIONS:
**********

*** Filtering and prewhitening ***

For the above values (TR = 10 sec, EPI length = 1.2 sec, stimulus duration
= 0.3 sec), what values would you recommend for (i) Data/High pass filter
cutoff [sec], (ii) Pre-stats/High pass filter [ON/OFF], and (iii) Stats/Use
FILM prewhitening [ON/OFF]?




*** Slice timing and the correct use of slice timing correction in different
Convolution models (None or Gamma) ***

If I understood previous comments correctly, then without slice-timing
correction, FEAT would assume that the 24 slices (that form one full volume)
were taken with timing of each slice spread out evenly during the 10 seconds
time period. Here I assume that time 0 = onset of first slice of first full
volume EPI. Thus, as far as FEAT is concerned, slice 1 = 0 seconds, slice 2
= 0.42 seconds, slice 3 = 0.83 seconds, ..., slice 23 = 9.17 seconds, and
slice 24 = 9.58 seconds. Thus for the top slice (number 24) there is a
discrepancy of 9.58 (FEAT value) - 1.15 (reality) = 8.43 seconds. Ok so far?
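To make that arithmetic explicit, here is a small Python sketch of the
assumed vs actual slice times (this encodes my understanding of the FEAT
assumption, so treat it as such, not as an authoritative statement):

```python
# Assumed (slices spread evenly over TR) vs actual (all slices within the
# 1.2 s acquisition window) timings for the ascending 24-slice sparse design.
N_SLICES = 24
TR = 10.0   # the window FEAT assumes the slices are spread over
ACQ = 1.2   # the real acquisition window

for s in (1, 2, 23, 24):
    assumed = (s - 1) * TR / N_SLICES
    actual = (s - 1) * ACQ / N_SLICES
    print(f"slice {s:2d}: assumed {assumed:5.2f} s, "
          f"actual {actual:4.2f} s, off by {assumed - actual:4.2f} s")
```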

Therefore, any Convolution model that would try to take into account the
timing between the stimulus and EPI (gamma function etc) would go
increasingly wrong towards the top slices. Is this correct? If I would like
to use a gamma function, would the "Add temporal derivative" (alone, without
slice timing correction) be expected to correct for such a large timing
discrepancy correctly?

Then, to analyze this kind of data without slice timing correction, I guess
I should only use Convolution models that do not take the timing between the
stimulus and EPI into account at all (is Convolution = None my only sensible
possibility)? In this case, I guess I should also turn off the "Add
temporal derivative" option?

A second point of confusion was what happens when I apply slice timing
correction in the above situation. If I understood correctly, with slice
timing correction, FEAT assumes that all 24 slices were taken
instantaneously at a time point halfway through the acquisition. However, it
was not entirely clear to me whether this halfway point, in this example,
would be at 0.6 seconds (1.2 seconds / 2) or at 5.0 seconds (10.0 seconds /
2). I guess 5 seconds is correct?

A comment (I hope I understood this correctly): in using slice timing
correction, I attempt to fix the slice timing so that it would seem that all
24 slices were taken instantaneously midway through the EPI; thus the
first and last slices would still be up to 0.6 seconds off the _actual_
slice collection time, but in the current application this is accurate
enough. (Steve told me that there is also a way to get the slice timings
corrected to the _really_ exact values, but for now this is not necessary.)




*** 3-column paradigm file timing corrections when using slice timing
correction ***

If the answer to the previous question is 5 seconds, then if I do apply
slice-timing correction, I should adjust my stimulus timings that are listed
in the 3-column paradigm files accordingly. If I understood Steve correctly,
then if in REAL time (0 = onset of first EPI) the first two lines of my
3-column paradigm file (e.g. for stimulus category 1) were

6 0.3 1 ; in reality, first stimulus occurs at 6 seconds after onset of EPI0
( = 4 seconds before EPI1)
16 0.3 0 ; second stimulus occurs at 16 seconds after onset of EPI0 ( = 4
seconds before EPI2), but it is not a category 1 event
...

then to correct for the FEAT assumption that the first EPI started at 5
seconds (real time) = 0 seconds (FEAT time), I should shift the paradigm
file by 5 seconds, and thus list instead

1 0.3 1 ; first stimulus is listed to start at 1 second (6 seconds real
time minus the 5 second shift = 1 second), and FEAT thinks that EPI1 occurs
at 5.0 seconds
11 0.3 0 ; etc.
...

Would this be correct?
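For what it's worth, here is how I would script the shift (a Python sketch;
the -5 second value follows my reasoning above, so treat it as an
assumption until confirmed):

```python
# Sketch: shift every onset in a 3-column paradigm file by -5 s so that
# stimulus times are expressed in FEAT time rather than real time.
# (The shift value is an assumption based on the reasoning above.)
SHIFT = -5.0

def shift_paradigm(lines, shift=SHIFT):
    shifted = []
    for line in lines:
        onset, duration, weight = line.split()
        shifted.append(f"{float(onset) + shift:g} {duration} {weight}")
    return shifted

print(shift_paradigm(["6 0.3 1", "16 0.3 0"]))
# → ['1 0.3 1', '11 0.3 0']
```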




*** How are the above adjustments correctly taken into account when setting
gamma function values? ***


If I were analyzing these sparse sampling data with a gamma function, using
both (i) the slice timing correction and (ii) the timing correction in the
3-column paradigm file, should one adjust the "mean lag" value of the gamma
function to reflect the time corrections? For example, if I expect that the
peak of the HDR to the auditory stimulus is reached 4.0 seconds after the
onset of the stimulus, should I set the gamma function mean lag = 4.0
seconds (time from stimulus onset to expected HDR peak), or should I add 5
seconds to this value to compensate for the -5 seconds time correction in
the paradigm file, and thus set mean lag = 9.0 seconds?
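To keep the bookkeeping in this question straight, here is the arithmetic
check I did (this is only my own reasoning, so please correct me if the
time conversion is wrong):

```python
# Sanity check of the shift arithmetic (values from the example above;
# the real-time <-> FEAT-time conversion is my own assumption).
SHIFT = -5.0     # paradigm-file correction: FEAT time = real time - 5 s
MEAN_LAG = 4.0   # expected stimulus-to-HDR-peak delay

stim_real = 6.0                   # first stimulus, real time
stim_feat = stim_real + SHIFT     # 1.0 s once the paradigm file is shifted
peak_feat = stim_feat + MEAN_LAG  # where the model would place the HDR peak
peak_real = peak_feat - SHIFT     # converting back to real time

print(peak_real)  # → 10.0 s, ie the EPI1 onset
```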

If I use slice timing correction in a sparse sampling experiment and analyze
that by using a gamma function, would it be equally good practice to set the
"phase" of the gamma function to -5 seconds (or +5 s???) as compared with
shifting the latencies in the 3-column paradigm files by -5 seconds? Would
the choice of method (shifting the paradigm file vs adjusting the gamma
phase value) affect the correct choice of mean lag value?




*** Analysis of these data with Convolution = None, and 1-column paradigm
file format ***

I would expect that the above slice timing correction and 3-column paradigm
file timing corrections would not be needed at all for Convolution model =
None, but would only become meaningful if one wanted to use a model that
does take the timing between the stimulus and EPI into account (e.g. a
gamma function)?

When I have tried to analyze these data with a 1-column paradigm file format
(ie no accurate stimulus timings are listed), and used Convolution = None
(ie stimulus-to-EPI latency should not be modelled?), I would expect that
slice timing correction and "Add temporal derivative" should not affect the
result at all. However, in reality, both slice timing correction and the
temporal derivatives, apparently independently of each other, affect the
result somewhat. Slice timing correction had a larger effect in the top
slices (it removed some activations that appeared to be artefacts), which in
some way makes sense to me because this is where the timing discrepancy is
largest - however, the finding that these parameters, in this specific
1-column paradigm format analysis without convolution, had ANY effect was
unexpected. Any suggestions as to why this happens? Or is my reasoning that
these should not have any effect unfounded?

****************

Phew. Your advice will be highly appreciated! Thanks in advance,

Tommi

--
Tommi Raij, M.D., Ph.D.
Research Fellow
MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging
Building 149, 13th Street, Mailcode 149-2301
Charlestown, MA 02129 U.S.A.

[log in to unmask]
FAX 1-617-726-7422