Marian,

The general problem of analyzing fMRI data from a small group of subjects
with a lot of movement may not yet be solved, but I would suggest the
following.

The ArtRepair toolbox has a slice repair feature that is mostly designed
for weird transient scanner noise, and your data probably doesn't need it.
(By the way, there is very little redundancy between the slice and volume repairs,
and if one type of repair fixes a problem, the second does not do anything
to the fixed data.)

Large movement subjects often exhibit both large amplitude 
movements and rapid movements, so the goal is to fix both types of problems.
The Volterra motion regressors will catch spin history artifacts, so I
believe they are better than six motion regressors. But rapid motions
may also cause image distortions, which is a problem different from
spin history, so the analysis needs to do something with the rapid motion
scans. One approach is the one by Lemieux that adds "null regressors" near
the times of jerky head motions. Alternatively, the ArtRepair volume repair
function replaces error-prone data by interpolating through volumes where there
was high scan-to-scan motion. It's designed to be automatic, to simplify
running large numbers of subjects. So, I would propose that ArtRepair
volume repair plus the Volterra motion regressors would be a good analysis
method. Or, follow the procedure in the Lemieux paper and add
null regressors as needed to the design matrices.
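For concreteness, here is a minimal numpy sketch of the two ideas above: a
Volterra-style expansion of the six realignment parameters, and linear
interpolation through flagged high-motion volumes. This is illustrative only,
not ArtRepair's or SPM's actual code; the function names and the particular
24-column expansion (parameters, one-scan lag, and squares of both) are my
assumptions.

```python
import numpy as np

def volterra_expansion(rp):
    """Expand six realignment parameters (T x 6) into a 24-column set:
    the parameters, their one-scan lag, and the squares of both.
    (One common choice of Volterra terms; illustrative only.)"""
    lagged = np.vstack([np.zeros((1, rp.shape[1])), rp[:-1]])
    return np.hstack([rp, lagged, rp ** 2, lagged ** 2])

def interpolate_bad_volumes(data, bad):
    """Replace flagged volumes (rows of a T x voxels array) with a
    linear interpolation between the nearest good volumes."""
    good = np.setdiff1d(np.arange(data.shape[0]), bad)
    out = data.copy()
    for v in range(data.shape[1]):
        out[bad, v] = np.interp(bad, good, data[good, v])
    return out
```

The expanded columns would simply be appended to the design matrix as
nuisance regressors, exactly as one would do with the six primary parameters.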

But even with the "best" method, how does one know if the result is correct? This
problem is tricky, because the GLM could give false activations from
task-correlated motion; consequently, higher activations are not necessarily
better. One suggestion is to quality check the estimates that
come out of the single subject analyses. (The estimates are the con images,
not the activation spmT images.) ArtRepair version 3 (just released) has
new tools to perform a quality check on those estimates. If the estimates
are unusual, then the single subject analysis may not have been successful.
The software will also suggest outlier subjects to be excluded from a group
analysis.  (http://cibsr.stanford.edu/tools/ArtRepair/ArtRepair.htm)
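As a toy illustration of that kind of quality check, one could summarize each
subject's con image by a single value and flag subjects whose estimates fall
far from the group median. This is a generic robust-outlier rule of my own,
not ArtRepair's actual criterion:

```python
import numpy as np

def flag_outlier_subjects(con_summary, thresh=3.0):
    """Flag subjects whose contrast estimates look unusual, via a
    robust z-score (median/MAD). Generic sketch, not ArtRepair's rule.
    con_summary: one summary value per subject (e.g. the mean of the
    con image within a mask)."""
    med = np.median(con_summary)
    mad = np.median(np.abs(con_summary - med))
    if mad == 0:
        return np.array([], dtype=int)  # no spread, nothing to flag
    z = 0.6745 * (con_summary - med) / mad  # scaled to ~N(0,1) units
    return np.flatnonzero(np.abs(z) > thresh)
```

The median/MAD pair is used instead of mean/SD so that the outliers being
hunted for do not themselves inflate the threshold.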

One controversial point is whether the interpolation by ArtRepair will
compromise the single subject activation map. For group analyses, only the
estimates are passed up to the group level, so it doesn't matter. For
single subject analyses, the toolbox includes a deweight function that essentially
removes the repaired scans from the GLM estimation, and SPM will correspondingly
reduce the number of degrees of freedom.
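Conceptually, the deweighting amounts to something like the following sketch
(the idea only, not the toolbox's implementation): the flagged scans are
effectively dropped from the estimation, and the error degrees of freedom
shrink accordingly.

```python
import numpy as np

def deweighted_glm(Y, X, repaired):
    """Fit a GLM with the repaired scans removed from estimation.
    Y: (T,) or (T, voxels) data; X: (T, p) design; repaired: scan indices.
    Returns the beta estimates and the reduced error degrees of freedom.
    Conceptual sketch only -- not the ArtRepair implementation."""
    keep = np.setdiff1d(np.arange(X.shape[0]), repaired)
    Xk, Yk = X[keep], Y[keep]
    beta, *_ = np.linalg.lstsq(Xk, Yk, rcond=None)
    dof = len(keep) - np.linalg.matrix_rank(Xk)
    return beta, dof
```

Note how the corrupted scan has no influence on the estimates at all, which
is the point of deweighting rather than merely interpolating.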

Good luck,
  Paul


 
----- Original Message -----
From: "Marian Michielsen" <[log in to unmask]>
To: [log in to unmask]
Sent: Wednesday, April 1, 2009 12:54:25 PM GMT -08:00 US/Canada Pacific
Subject: [SPM] modelling out task related movement

Dear SPMers,

I have a data set in which quite a few subjects show task-correlated head
movements. As people on the list have commented earlier, the most valid
option would be to just throw away those subjects, but if I do that I have
very little data left. I am now looking at different options to deal with
this problem. Just adding the realignment regressors into the model seems
too conservative; if I do that, in some sessions I have almost no activation
left. Because of that, I tried some other approaches, which as far as my
understanding goes (from reading other posts in this list and from looking
at my own data), range from very conservative to very unconservative:

- Modelling with realignment regressors -> this leaves me with almost no
activation

- Modelling the Volterra expansion of the realignment regressors (as
described for instance in Lemieux, 2007) -> this seems to work out slightly
better than using just the six primary realignment regressors; with this
approach my contrast maps show a bit more activation
- Unwarping the data instead of modelling the realignment regressors -> this
results in quite a lot of activation in task-related areas but, as it seems,
also quite a lot of noise
- Neither unwarping nor modelling the realignment regressors -> results
in the most activation, and will probably generate a lot of false positives

Of those four options, one of the middle two is probably most valid.
However, I have also just started looking into the ArtRepair toolbox (by
Paul Mazaika). I wonder if it makes sense to use this toolbox in
combination with one of the aforementioned approaches. With this toolbox,
it is possible to detect and repair artifacts both at the slice and at the
volume level. The first would be done before any preprocessing steps, the
second just before estimating the model. Does anybody know if it makes sense
to repair artifacts at both those levels in one session (i.e. both within and
between volumes), or is that redundant? If you wanted to be very precise, you
could for instance opt for the following approach:

1. use artrepair to repair bad slices
2. realign
3. unwarp
4. coregister
5. normalize
6. create first level model (without realignment regressors)
7. use artrepair to repair bad volumes
8. estimate results from the repaired data

Or use the same steps but choose to include the realignment regressors
instead of unwarping.

However, maybe some of those steps make some of the other steps redundant.
Does anyone have any thoughts on this, or has anyone tried other approaches?
Any help would be very much appreciated!

Kind regards,

Marian

-- 
Paul K. Mazaika, PhD.
Center for Interdisciplinary Brain Sciences Research
Stanford University School of Medicine
Office:  (650)724-6646             Cell:  (650)799-8319
