LLN, 21/03/06
Dear Todd,
I cannot answer your specific questions, but be aware that
concatenation is not recommended. If the scanner stopped and
restarted between runs, yet you modelled your conditions as a single
session, then your data no longer represent a continuous time series,
and several aspects of SPM will be disrupted, e.g.:
- high-pass filtering,
- temporal autocorrelation estimation,
- grand-mean scaling,
- session-specific mean modelling.
(I quote Rik Henson, from this list).
Note that I once compared "concatenated runs + constant" to
"separated sessions" and the results were quite similar, though not
identical.
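[For illustration, the "concatenated runs + constant" design mentioned above can be sketched in numpy. This is a toy example, not SPM code: the run counts and the task regressor are made up, and SPM builds these matrices internally in MATLAB. The point is simply that one constant column per run absorbs run-specific baselines, roughly as separate session means would.]

```python
import numpy as np

# Toy example: 3 runs of 100 scans each, concatenated into one
# 300-scan session, with one task regressor plus a separate
# constant (indicator) column per run.
n_runs, n_scans = 3, 100

# Placeholder task regressor -- in practice this would be the
# convolved stimulus function.
task = np.random.RandomState(0).rand(n_runs * n_scans)

# One constant column per run: block-diagonal ones.
run_constants = np.kron(np.eye(n_runs), np.ones((n_scans, 1)))

# Full design matrix: (300 scans) x (1 task + 3 run constants).
X = np.column_stack([task, run_constants])
```

Fitting a GLM against X then estimates a separate baseline per run, so step changes in mean signal between runs do not leak into the task estimate.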
Hope this helps,
Mauro
>Hi, all. I've got a question that I hope isn't too ignorant.
>
>Essentially, I'm wondering what SPM does with the raw EPI data
>*before* it runs the GLM. I believe this question can also be
>rephrased as, "What's the best way to get percent signal change out of
>a time series?"
>
>Short background: we're running an event-related experiment in which we
>collect 6 "blocks" of data. Acquisition stops between each block, but
>the subject stays in the scanner. We're interested in looking at this
>data as a single experiment, so we concatenate these blocks to form
>one single time series.
>
>If we want to extract a time series out of that concatenated
>information, we immediately see two potential problems:
>1) There are low-frequency fluctuations (relating to scanner drift,
>subject movement, etc) in the time series
>2) Voxels have differing baseline intensities, presumably due to
>proximity to ventricles, sinuses, etc.
>
>The first problem can be solved by running a high-pass filter (e.g.,
>spm_filter), but should this filter be run on each individual block
>before concatenating, or on the concatenated whole? Does it matter?
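[To make the per-block option concrete, here is a rough numpy sketch of discrete-cosine high-pass filtering in the spirit of spm_filter. It is a simplified re-implementation, not SPM code; the TR, cutoff, and data are made up. Filtering each run separately means the drift model never has to fit the discontinuities between runs.]

```python
import numpy as np

def dct_highpass(y, tr, cutoff=128.0):
    """Remove slow drift by regressing out a discrete-cosine basis
    (simplified analogue of SPM's spm_filter; 128 s is SPM's default
    high-pass cutoff)."""
    n = len(y)
    # Number of basis functions below the cutoff frequency.
    n_basis = int(np.floor(2 * n * tr / cutoff)) + 1
    k = np.arange(1, n_basis)          # skip the constant term
    t = np.arange(n)
    # DCT-II basis functions, one column per frequency.
    X = np.cos(np.pi * np.outer(t + 0.5, k) / n)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta                # residuals = filtered series

# Filter each run separately, then concatenate.
runs = [np.random.RandomState(i).randn(100) for i in range(3)]
filtered = np.concatenate([dct_highpass(r, tr=2.0) for r in runs])
```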
>
>The second problem can be solved by converting the voxel intensities
>to a percent signal change, but this is tricky. Do you assume that the
>differences in baseline intensities are meaningful, and thus simply
>divide by the mean SI for each voxel? If so, high intensity voxels
>will need a greater absolute signal change to achieve the same %
>signal change seen in low intensity voxels, which biases you against
>finding results in high intensity areas. Or, alternatively, do you
>assume that the baseline differences are unimportant, and normalize
>*all* voxels to the same baseline before determining %signal change?
>(In other words, do you expect the magnitude of the task-related
>response to be absolute, or relative to the baseline signal?) In
>either case, should the mean baseline be calculated at the local block
>level, or at the global experiment level?
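[The two normalization options above can be written out in numpy; this is illustrative only, with fake data and variable names of our choosing. Note that rescaling every voxel to a common baseline makes percent signal change equal to the absolute change divided by a constant, which is exactly the "absolute vs. relative response" distinction.]

```python
import numpy as np

# Fake data: 200 scans x 10 voxels with differing baseline intensities.
rng = np.random.RandomState(42)
data = 500.0 + 50.0 * rng.rand(200, 10)

# Option 1: divide each voxel by its own mean
# (baseline differences treated as meaningful).
psc_per_voxel = 100.0 * (data / data.mean(axis=0) - 1.0)

# Option 2: shift every voxel to a common baseline first
# (baseline differences treated as uninteresting).
common = 100.0
rescaled = data - data.mean(axis=0) + common
psc_common = 100.0 * (rescaled / common - 1.0)
# Algebraically, psc_common == data - data.mean(axis=0):
# the absolute signal change, scaled by the common baseline.
```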
>
>And, finally, does it matter in which order you filter and scale?
>
>Obviously, changing these methods will give you different patterns of
>activation, but it's not entirely clear which way is correct. I
>suspect that many of you have given this much more thought than I
>have, and I'd really appreciate a few pointers. If this is already
>documented, please forgive the redundant question.
>
>Thanks!
>Todd
--
Mauro PESENTI
Research Associate, National Fund for Scientific Research (Belgium)
Unite de Neurosciences Cognitives
Departement de Psychologie
Universite Catholique de Louvain
Place Cardinal Mercier, 10
B-1348 Louvain-la-Neuve
tel.: +32 (0)10 47 88 22
fax: +32 (0)10 47 37 74
E-mail: [log in to unmask]
http://www.nesc.ucl.ac.be
http://www.nesc.ucl.ac.be/mp/pesentiHomepage.htm