> The conclusion I draw from these is that it is not sound to use
> individualized explicit grey matter masks in the FFX analyses, as it will
> cause problems with regard to GFT at the second level.
I think what is more relevant than the type of masking applied is the
shape of the masks...for example, as mentioned in the posts you linked
to, "smoother" shapes (like a sphere) are probably better for random
field theory than less-smooth shapes (like a brain with the ventricles
not masked). Note that the 'built-in' SPM brain mask includes the
ventricles and is thus probably an appropriate shape.
> From my, admittedly limited, reading it seems that the "gold" standard is to
> apply a smoothed and thresholded mask based on the MNI avg152T1 template,
> and apply this in the RFX analysis.
I don't think there is a particular standard. I don't see a problem
with the approach you mention (which is essentially equivalent to
using the built-in brainmask.nii file, assuming ventricles are
included to make it more spherical). One important thing to note,
though, is the proportional 1st-level masking that is often applied by
default. Even without an explicit brain (or GM) mask, this
proportional masking generally does a good job at masking out-of-brain
voxels at the first level. At the second level, any regions that have
been masked in any of the 1st-level analyses will also be masked.
Thus, an analysis that only uses implicit masking and one that uses
some sort of explicit brain/GM mask may end up with very similar masks
in the end. (Of course it's always good to verify what the mask.img
looks like at both levels to make sure it's sensible...)
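To make the intersection point concrete, here is a minimal numpy sketch of how a second-level mask arises from first-level masks (a simplified illustration of the behavior described above; SPM computes this internally, and the masks here are random stand-ins):

```python
# Simplified illustration: the group-level mask is the intersection (logical
# AND) of the subject-level masks -- a voxel masked out in any first-level
# analysis is also masked at the second level.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-level masks for 5 subjects on a small 4x4x4 grid;
# roughly 80% of voxels survive masking in each subject
subject_masks = rng.random((5, 4, 4, 4)) > 0.2

# Second level: a voxel is kept only if it is present in every subject's mask
group_mask = np.all(subject_masks, axis=0)

print("Voxels per subject mask:", [int(m.sum()) for m in subject_masks])
print("Voxels in group mask:", int(group_mask.sum()))
```

The group mask can therefore be no larger than the smallest first-level mask, which is why implicit-only and explicit-mask analyses often converge on similar final masks.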
> I have some questions with regards to this approach, however. Wouldn't this
> procedure mess up the analysis with regards to GFT just as much as applying
> a mask on the first level?
Yes...what's important is the shape of the final mask produced at the
2nd level, regardless of how you get there (combination of 1st-level
masking and/or whole brain/GM mask).
> If this isn't the case, I can't help but wonder whether a more sound
> approach would be to construct the mask as a mean of the c1 images that the
> Unified segmentation procedure produces. Or is the difference between these
> two masks post-normalization so small as to be negligible?
In my (limited) experience, a mean GM mask produced through
segmentation, smoothing, and thresholding is very similar to the
included brainmask. The main difference is that using the c1* images
will leave some of the ventricles unmasked, which, as pointed out in
the posts you link to, may adversely affect the random field theory
application...so if you use an explicit mask, I would just use the
included brainmask.
> Third, provided that it makes sense to use gray matter masks at all, should
> the smoothing kernel applied on the mask be identical to the kernel used on
> the functional images, or does GFT provide for some criterion for the
> smoothness of the mask?
If you create your own mask, I think it makes sense to use the same
smoothing kernel. This has nothing to do with random field theory
though....the explicit mask is treated as a binary image (voxels > 0
included in the analysis). Smoothing the mask is just to make
reasonably sure that you are not masking out effects in your
functional data (which, if smoothed, might be larger than the
unsmoothed effects would be).
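A tiny numpy/scipy sketch of the binarization point (a voxel is included iff its mask value is > 0): smoothing a mask before binarizing can only grow it, never shrink it, which is what protects smoothed effects near the mask edge. The grid, sigma, and region here are arbitrary assumptions:

```python
# Illustration: an explicit mask is treated as binary (value > 0 means
# included), so smoothing before binarizing only enlarges the included region.
import numpy as np
from scipy.ndimage import gaussian_filter

mask = np.zeros((20, 20))
mask[8:12, 8:12] = 1.0                 # small square "in-brain" region

binary_before = mask > 0
binary_after = gaussian_filter(mask, sigma=1.5) > 0

print("Voxels before smoothing:", int(binary_before.sum()))
print("Voxels after smoothing:", int(binary_after.sum()))   # larger
```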
> Finally, does the use of a gray matter mask allow one to circumvent FWE
> correction to some degree, or at least allow one to use a more lenient
> threshold, as it reduces the number of comparisons made?
FWE will take into account the number of voxels you are correcting
over. So, all other things being equal, an FWE threshold of .05 over
more voxels will require a higher t statistic than an FWE threshold of
.05 over fewer voxels. Thus, masking is not really justification for
using a more lenient threshold...the benefit, if any, comes from
reducing the number of voxels you are looking at (assuming you haven't
done strange things to the estimated smoothness, as addressed in the
posts you linked to).
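The direction of the voxel-count effect can be illustrated with a Bonferroni correction (a cruder bound than SPM's random-field-theory FWE, but the qualitative behavior is the same): for a fixed family-wise alpha, correcting over fewer voxels yields a lower critical t value. The voxel counts and degrees of freedom below are arbitrary assumptions:

```python
# Simplified Bonferroni illustration (not SPM's RFT-based FWE correction):
# fewer voxels in the search volume -> a lower critical t for the same alpha.
from scipy.stats import t

alpha = 0.05
df = 20   # illustrative degrees of freedom

thresholds = {n: t.isf(alpha / n, df) for n in (100_000, 10_000)}
for n_voxels, t_crit in thresholds.items():
    print(f"{n_voxels} voxels -> t > {t_crit:.2f}")
```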
Hope this helps,