Hi Ged,
Thank you for your reply. You have convinced me that thresholding
first-level contrast images is not a good idea. However, I still need some
other way to avoid the smoothness-estimation problem (an unusually small FWHM).
When I perform a one-sample t-test on a certain set
of con images without absolute threshold masking, I get
FWHM: [0.0000 0.1336 0.0000] (pixels).
There are no NaNs in the first-level contrasts. An implicit mask is
specified at the second level, but I don't think it makes any difference for
contrast images stored at float precision, since NaN (not zero) is the
implicit mask value for float images. So, if I understand correctly, the net
effect is no masking at all (except for voxels that have constant values
across subjects).
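As a rough illustration of why the implicit mask is a no-op here (a
hypothetical numpy sketch, not SPM code; the toy arrays are made up): SPM's
implicit masking treats NaN as the masked value for floating-point images
and zero as the masked value for integer images, so a float con image with
no NaNs passes through untouched.

```python
import numpy as np

# Toy 1-D "con images" (hypothetical values, for illustration only).
float_con = np.array([0.0, 1.5, -2.0, 0.0])  # float type: zeros are NOT masked
int_con = np.array([0, 3, -2, 0])            # integer type: zeros ARE masked

# Implicit masking rule: NaN for float images, zero for integer images.
float_included = ~np.isnan(float_con)  # all True: nothing is excluded
int_included = int_con != 0            # the two zero voxels drop out

print(float_included.sum(), int_included.sum())  # 4 2
```

So with float con images and no NaNs, every voxel (including out-of-brain
zeros) still enters the analysis, which is presumably what upsets the
smoothness estimate.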
Then I thought I should try specifying an explicit mask: a mask image with
zeros for out-of-brain voxels. It seems to work, as
the smoothness estimate is now more reasonable:
FWHM: [4.2832 4.7114 4.2297] (pixels).
I think this is a better solution than thresholding, and I plan to proceed
with this method of analysis. Please let me know if there is anything
wrong with this approach.
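To spell out what the explicit mask does (again a hypothetical numpy sketch,
not SPM code; the volume shape and names are made up): only voxels where the
mask image is nonzero enter the model, so the out-of-brain zeros no longer
feed the smoothness estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8x8 "con image": noise inside a central brain region, exact zeros outside.
vol = np.zeros((8, 8, 8))
brain = np.zeros((8, 8, 8), dtype=bool)
brain[2:6, 2:6, 2:6] = True          # 4*4*4 = 64 in-brain voxels
vol[brain] = rng.normal(size=brain.sum())

# Explicit mask image: 1 for in-brain voxels, 0 for out-of-brain voxels.
explicit_mask = brain.astype(np.uint8)

# Only voxels with a nonzero mask value are included in the analysis.
analysis_voxels = vol[explicit_mask.astype(bool)]

print(analysis_voxels.size)  # 64
```

The 448 out-of-brain zero voxels are simply absent from the analysis,
rather than being carried along as flat, artificially "smooth" data.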
Best wishes,
Kosuke Itoh
On Fri, 9 Feb 2007 18:46:54 +0000
Ged Ridgway <[log in to unmask]> wrote:
> Hi Kosuke,
>
> I can't give an authoritative answer, but I think thresholding/masking
> first level con images for a second level analysis should be
> unnecessary, and may be a bad idea.
>
> Say I was looking at a two sample t-test of some con images, and at a
> particular region, sample B were all large and positive, while sample
> A were either all large and negative, or all very close to zero. In
> either case, I would want B>A to return significance, not for A to be
> masked out of the analysis.
>
> > An appropriate mask can also be obtained by setting absolute threshold
> > to "none," as it sets xM.TH to -Inf. However, data analyzed in this way
> > sometimes (but not always) causes problems with smoothness estimation.
> > I am not sure why, but it may be due to zero-valued pixels that survive.
>
> This sounds to me like a problem that should be investigated and
> solved itself, rather than worked around by thresholding contrasts.
> What kind of problems with smoothness estimation (e.g. errors,
> warnings, unusually rough/smooth results, or something else)? Are
> there NaNs in the first level contrasts, or do you have implicit
> zero->NaN thresholding at the second level?
>
> Best,
> Ged.