Dear Priya,
The easiest option is to segment the anatomical T1 volume or the mean EPI (if the spatial resolution of your EPI data is good enough) and use the corresponding flow fields for normalisation. During model specification you could then enter an explicit mask based on the binarized individual wc1* files. You should most likely binarize the smoothed version of the wc1* file, as your data is smoothed as well. Also check whether it is necessary to reslice the mask, as the wc1 files usually differ from the normalised EPI files in resolution.

The threshold for binarisation is more or less up to you. For VBM one usually takes into account voxels of the smoothed w(m)c1* files with a GM volume/density of at least 0.1 or 0.2, which might be reasonable for your purpose as well. Note that any threshold is somewhat arbitrary and can become problematic when dealing with e.g. patients with large atrophy; see Strigel et al. (2005, AJNR), the comment by Parrisha (2006, AJNR), and the paper by Ridgway et al. (2009, NeuroImage) on that issue.

Also note that some EPI voxels may show low signal even though they are associated with a high GM volume/density according to the T1 image. These voxels are usually discarded based on the "Masking threshold", which might have to be adjusted in your case.
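Just to make the binarisation step concrete, here is a minimal numpy sketch (the function name and the toy array are my own; in practice you would read the voxel data of the smoothed wc1* image with a tool such as nibabel and write the mask back out as a NIfTI file):

```python
import numpy as np

def binarize_gm(gm_prob, threshold=0.2):
    """Binarize a (smoothed) GM probability map at the given threshold.

    gm_prob: 3-D array of GM volume/density values in [0, 1], e.g. the
    voxel data of a smoothed wc1* image. Returns a uint8 mask that is 1
    where GM >= threshold and 0 elsewhere.
    """
    return (gm_prob >= threshold).astype(np.uint8)

# Toy 2x2x2 "GM map": only voxels with at least 0.2 GM survive the cut.
gm = np.array([[[0.05, 0.25], [0.80, 0.15]],
               [[0.30, 0.00], [0.95, 0.19]]])
mask = binarize_gm(gm, threshold=0.2)
# mask.sum() -> 4 voxels kept
```

Whether 0.1 or 0.2 (or something else) is appropriate is exactly the arbitrary choice discussed above, so it is worth checking the resulting mask visually.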
Alternatively, you could apply the mask during preprocessing, e.g. mask the realigned and resliced EPI images (rf* or raf*) with the binarized c1* files, or the normalised and realigned EPI images (wf* or waf*) with the binarized w(m)c1* files.
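Applied to the voxel data, this masking amounts to an element-wise multiplication of each EPI volume with the binary mask. A small numpy sketch (again with invented names; the mask must already be in the same space and resolution as the EPI data, which is why the reslicing step above matters):

```python
import numpy as np

def mask_epi(epi, mask):
    """Zero out EPI voxels outside a binarized GM mask.

    epi: 4-D array (x, y, z, time), e.g. a realigned EPI time series.
    mask: 3-D array of 0/1 values from a binarized (w)c1* image.
    Broadcasting applies the same mask to every time point.
    """
    return epi * mask[..., np.newaxis]

# Toy example: 2x2x1 volume, 3 time points; mask keeps one column.
epi = np.ones((2, 2, 1, 3))
mask = np.array([[[1], [0]],
                 [[1], [0]]], dtype=np.uint8)
masked = mask_epi(epi, mask)
```

In SPM itself this would be done with ImCalc rather than by hand, but the arithmetic is the same.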
Personally, I prefer unmasked data. Data sets often suffer from massive artefacts or preprocessing errors, resulting in lots of apparent white-matter or CSF activation. If you masked such data sets, it would look as if you had "nice" GM activations.
Hope this helps a little
Helmut