Dear Mayank,
Sometimes the simplest thing is to look at the code to see exactly what
is happening. The relevant code here is fairly short, especially if you
ignore the parts that make it work with both images and meshes:
https://github.com/spm/spm12/blob/r7771/spm_fmri_spm_ui.m#L300-L354
SPM is first going to compute the 'globals', stored in SPM.xGX.rg: this
is a vector containing the mean signal for each volume, considering only
voxels whose values are greater than 12.5% (i.e. 1/8) of the mean of the
entire volume, thereby discarding non-brain/background voxels.
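In NumPy terms, the computation amounts to something like the following
(an illustrative sketch, not the actual SPM code, which is MATLAB; the
threshold of 1/8 of the whole-volume mean mirrors SPM's spm_global):

```python
import numpy as np

def global_mean(vol):
    """Mean of supra-threshold voxels, where the threshold is 1/8 of
    the mean of the entire volume (cf. SPM's spm_global)."""
    thresh = vol.mean() / 8.0
    return vol[vol > thresh].mean()

# Toy volume: mostly background near 0, some "brain" voxels near 100
rng = np.random.default_rng(0)
vol = np.concatenate([rng.normal(0.5, 0.1, 8000),   # background
                      rng.normal(100.0, 5.0, 2000)])  # brain
g = global_mean(vol)  # close to 100: background falls below threshold
```

The whole-volume mean here is pulled far below the brain signal by the
background voxels, which is why the 1/8 threshold is enough to exclude
them.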
Then the values of each volume i are going to be rescaled by
100/mean(g) (Global normalisation: None; i.e. "session-specific grand
mean scaling") or by 100/g(i) (Global normalisation: Scaling, i.e.
proportional scaling). The scaling factors are stored in SPM.xGX.gSF.
These options seem to correspond to
your "intensity normalization" and "temporal normalization". An old
discussion on this topic can be found in these slides from Tom:
https://www.fil.ion.ucl.ac.uk/spm/course/slides05-usa/pdf/Lab_D_GlobalScal.pdf
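The two options can be sketched as follows (again illustrative NumPy,
not the SPM code itself; the names g, gSF and GM mirror those used in
spm_fmri_spm_ui.m, with GM = 100 the target grand mean):

```python
import numpy as np

GM = 100.0                                 # target grand mean, as in SPM
g = np.array([98.0, 102.0, 100.0, 104.0])  # globals g(i), one per volume

# Global normalisation 'Scaling': proportional scaling.
# Each volume i is multiplied by its own factor 100/g(i), so every
# rescaled volume has a global mean of exactly 100.
gSF_scaling = GM / g

# Global normalisation 'None': session-specific grand mean scaling.
# Every volume in the session is multiplied by the same factor
# 100/mean(g), so only the session's grand mean becomes 100 and the
# volume-to-volume variation in the globals is preserved.
gSF_none = np.full_like(g, GM / g.mean())
```

Note that 'Scaling' removes all volume-wise global signal fluctuations
(hence the concern about nulling out large-scale task effects), whereas
'None' only fixes the units of the session as a whole.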
Best regards,
Guillaume.
On 04/03/2020 00:42, Mayank Jog wrote:
> Dear experts,
> I was trying to understand intensity normalization (and its potential
> pitfalls). I read in the SPM mailing list that if there are large-scale
> task-related effects in the brain, then dividing each timepoint by the
> average signal in the brain at that timepoint might null out the
> task-related effects.
>
> ^^ My understanding of this is that this is spatial normalization,
> i.e. at each time-point, the in-brain data is averaged and used to
> normalize.
> [This is provided in SPM as an option in the "First level" analysis
> called Global-Normalization-scaling]
>
> Q1. Am I understanding this correctly?
>
> I was also curious about temporal normalization. This would involve
> dividing the time course for each voxel by the average temporal signal
> at the same voxel. This would be equivalent (I think :) ) to dividing
> the beta-weights by the beta-weight corresponding to the intercept
> term in the model.
>
> Q2. Is this kind of normalization useful? For instance, when using
> first-level contrasts in a second-level analysis to calculate the
> average effect across subjects?
> Q3. Are there pitfalls associated with this sort of normalization?
>
> Thank you!
> Mayank
--
Guillaume Flandin, PhD
Wellcome Centre for Human Neuroimaging
UCL Queen Square Institute of Neurology
London WC1N 3BG