I meant to explain why I'm asking. If I run a first-level FEAT analysis on a 4D time series with the cluster-based threshold set to 0 and no mask, the average time course for the contrast is based on ~91K voxels. However, if I add a small pre-threshold mask, the resulting average time course is based on ~98K voxels. If the average time course were drawn solely from voxels within the mask, I would have expected only a small fraction of the ~91K voxels to contribute. Instead, the number of voxels contributing to the average time course INCREASED when the mask was applied. Any clarification on why this is happening would be greatly appreciated.
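
In case it helps, here is one way to sanity-check the voxel counts with standard FSL command-line tools (a minimal sketch; the mask name and the .feat directory name are hypothetical, but thresh_zstat1 is the usual FEAT output for contrast 1):

    # Nonzero voxels (and volume) in the pre-threshold mask
    fslstats my_mask.nii.gz -V

    # Voxels surviving thresholding in the FEAT output for contrast 1
    fslstats my_analysis.feat/thresh_zstat1.nii.gz -l 0 -V

    # Restrict the thresholded map to the mask, then recount
    fslmaths my_analysis.feat/thresh_zstat1.nii.gz -mas my_mask.nii.gz masked_zstat1
    fslstats masked_zstat1.nii.gz -l 0 -V

If the masked count comes out much smaller than ~98K, that would suggest the reported average time course is not being restricted to the mask in the way one might expect.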
Matt