The first part of spatially transforming images via spatial
normalisation is an affine transform. These affine transforms
can easily be combined with the rigid body transformations
estimated at the realignment stage. This is in fact what is done
in SPM96 and SPM99.
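As a sketch of why combining them is cheap: both transforms can be written as 4x4 homogeneous matrices, so composing the realignment with the normalisation affine is a single matrix product rather than an extra resampling step. The helper below is my own illustration (the rotation order shown is an assumption, not necessarily the SPM convention):

```python
import numpy as np

def rigid_body(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid-body transform from 3 rotations (radians)
    and 3 translations. Rotation order x-then-y-then-z is assumed
    here for illustration only."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx          # composed rotation
    M[:3, 3] = [tx, ty, tz]           # translation
    return M

# A toy zoom-only affine standing in for the normalisation estimate.
realign = rigid_body(0.01, 0.0, 0.02, 1.5, -0.5, 0.0)
affine = np.diag([1.1, 0.95, 1.05, 1.0])
combined = affine @ realign           # one transform, one interpolation
```

Because the two are folded into one matrix, the image need only be interpolated once, avoiding a second round of interpolation error.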
Realignment only does rigid body registration, which is
parameterised by 3 rotations and 3 translations. For a number of
reasons, this does not remove all variance in the data that can
be explained by movement. The main sources of residual variance
that I can think of are:
Interpolation error from the resampling algorithm used to
transform the images can be one of the main sources of
motion-related artifacts. When the image series is resampled, it
is important to use a very accurate interpolation method, such as
sinc or Fourier interpolation.
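To make the interpolation point concrete, here is a minimal 1-D sketch of windowed sinc interpolation (the Hann window and kernel half-width are my own choices; practical implementations differ in window and width):

```python
import numpy as np

def sinc_interp(signal, x, half_width=4):
    """Resample a 1-D signal at a (possibly non-integer) position x
    using a Hann-windowed sinc kernel. Truncating and windowing the
    ideal sinc is the usual compromise; wider kernels are more
    accurate but slower."""
    n = np.arange(len(signal))
    lo = max(0, int(np.floor(x)) - half_width + 1)
    hi = min(len(signal), int(np.floor(x)) + half_width + 1)
    idx = n[lo:hi]
    d = x - idx
    kernel = np.sinc(d) * (0.5 + 0.5 * np.cos(np.pi * d / half_width))
    return float(np.dot(signal[lo:hi], kernel / kernel.sum()))

t = np.linspace(0, 1, 64)
s = np.sin(2 * np.pi * 3 * t)
# At integer positions the windowed sinc reproduces the sample
# exactly; between samples it approximates the band-limited signal.
```

Cheaper schemes such as trilinear interpolation smooth the data slightly at each resampling, and that smoothing varies with the estimated movement, which is one route by which interpolation error becomes a motion-correlated artifact.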
When MR images are reconstructed, the final images are usually
the modulus of the initially complex data, resulting in any
voxels that should be negative being rendered positive. This has
implications when the images are resampled, because it leads to
errors at the edge of the brain that cannot be corrected, however
good the interpolation method is. Possible ways to circumvent
this problem are to work with complex data, or possibly to apply
a low pass filter to the complex data before taking the modulus.
The sensitivity (slice selection) profile of each slice also
plays a role in introducing artifacts.
fMRI images are spatially distorted, and the amount of distortion
depends partly upon the position of the subject's head within the
magnetic field. Relatively large subject movements result in the
brain images changing shape, and these shape changes cannot be
corrected by a rigid body transformation.
Each fMRI volume of a series is currently acquired a plane at a
time over a period of a few seconds. Subject movement between
acquiring the first and last plane of any volume is therefore
another reason why the images may not strictly obey rigid body
rules.
After a slice is magnetised, the excited tissue takes time to
recover to its original state, and the amount of recovery that
has taken place will influence the intensity of the tissue in the
image. Out of plane movement will result in a slightly different
part of the brain being excited during each repeat. This means
that the spin excitation will vary in a way that is related to
head motion, and so leads to more movement-related artifacts.
Ghost artifacts in the images do not obey the same rigid body
rules as the head, so a rigid rotation to align the head will not
mean that the ghosts are aligned.
The accuracy of the estimated registration parameters is normally
in the region of tens of micrometres. This is dependent upon many
factors, including the effects just mentioned. Even the signal
changes elicited by the experiment can have a slight effect on
the estimated parameters.
These problems cannot be corrected by simple image realignment,
and so are possible sources of stimulus-correlated motion
artifacts. Systematic movement artifacts resulting in a signal
change of only one or two percent can lead to highly significant
false positives over an experiment with many scans. This is
especially important for experiments where some conditions may
cause slight head movements (such as motor tasks, or speech),
because these movements are likely to be highly correlated with
the experimental design. In cases like this, it is difficult to
separate true activations from stimulus-correlated motion
artifacts. Provided there are enough images in the series and
the movements are small, some of these artifacts can be removed
by using an ANCOVA model to remove any signal that is correlated
with functions of the movement parameters. However, when the
estimates of the movement parameters are related to the
experimental design, it is likely that much of the true fMRI
signal will also be lost. These are still unresolved problems.
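The ANCOVA-style confound removal described above amounts to regressing each voxel's time series on the movement parameters and keeping the residuals. A minimal numpy sketch (six toy motion parameters per scan; all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 120
motion = rng.standard_normal((n_scans, 6)) * 0.1   # toy realignment parameters
# A toy voxel time series contaminated by the first motion parameter.
voxel = 0.5 * motion[:, 0] + rng.standard_normal(n_scans) * 0.2

# Regress the voxel time series on the motion parameters plus a
# constant, and keep the residuals as the "cleaned" series.
X = np.column_stack([motion, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - X @ beta

# By construction the residuals are orthogonal to every regressor,
# which is exactly why task-correlated motion parameters also strip
# out task-correlated signal.
```

This makes the trade-off in the text explicit: whatever part of the true activation lies in the column space of the motion regressors is removed along with the artifact.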
| When analyzing fMRI data, I normally realign the functional
| images using the 'coregister only' option. According to my
| understanding, this procedure writes a 6-parameter affine transformation
| into the .mat file associated with each functional image. The
| affine transformation describes how to translate and rotate any given
| functional image such that it is realigned with the reference image.
| After realignment, I usually normalize the functional images to MNI
| space. From my reading of the SPM documentation, it seems to be the case
| that the spatial normalization module looks at the .mat file associated
| with each functional image. Further, the online help states that it is
| possible to normalize the functional images without having resliced them
| first. When normalizing functional images that have been realigned with
| the 'coregister only' option, does the normalization module use the .mat
| file associated with each functional image to create normalized images that
| are realigned with one another, in addition to being warped to MNI space?
| The reason I ask is that we recently found that including motion
| parameters as regressors during model estimation gets rid of a lot of
| spurious-looking activations (e.g., activations around the edge of the
| brain). This is certainly a good result. But, I'm wondering why
| entering motion parameters (produced for each session during realignment)
| as regressors during model estimation should be so helpful if, in fact, the
| functional images were already realigned with each other during
| normalization. That is, if the motion has already been corrected, how
| could the motion parameters account for much variance during model
| estimation? Any advice would be greatly appreciated!