> > What would you suggest as the convolution kernel and the FWHM for an epi
> > data set with the following parameters: 64X64, xyz 3.75 3.75 7.50
> > which includes a 5mm gap in the z direction, TR 2.5, Task, Rest, Task,
> > Rest 10 scans each, total 40 images. The task consists of showing
> > stimuli (5 pictures shown at 3s each interspersed with 2s rest
> > intervals). 1.5T magnet
>
> I would suggest 6x6x12mm. You can explore a range of smoothing kernels
> to identify the most appropriate balance between sensitivity and
> spatial acuity in the sort of data you acquire.
>
> With best wishes - Karl
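[For readers wanting to try this: a minimal sketch of applying Karl's suggested 6x6x12mm kernel to a volume with the voxel sizes given above. The conversion FWHM = sigma * sqrt(8 ln 2), the scipy-based smoothing call, and the placeholder array shape are illustrative assumptions, not part of the original posts.]

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Gaussian FWHM (mm) -> sigma: FWHM = sigma * sqrt(8 * ln 2)
FWHM_TO_SIGMA = 1.0 / np.sqrt(8.0 * np.log(2.0))  # ~0.42466

fwhm_mm = np.array([6.0, 6.0, 12.0])    # suggested kernel
voxel_mm = np.array([3.75, 3.75, 7.5])  # voxel sizes from the question

# Per-axis sigma in voxel units, since gaussian_filter works in voxels.
sigma_vox = fwhm_mm * FWHM_TO_SIGMA / voxel_mm

# Placeholder EPI volume (64 x 64 in-plane, hypothetical 20 slices).
rng = np.random.default_rng(0)
volume = rng.standard_normal((64, 64, 20))
smoothed = gaussian_filter(volume, sigma=sigma_vox)
```

Note that the kernel is specified in millimetres but applied in voxel units, which is why the per-axis division by voxel size matters for anisotropic voxels like these.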
I would like to point out the risk of fishing here: Searching over a
range of smoothing kernels introduces a new multiple comparisons
problem. If you produce multiple analyses, each with a different
smoothing, and then pick out the one you like, the significances SPM
reports will be inflated (p-values lower than they really are).
There have been papers that address this problem (see below) but I
don't know of any implementations.
As a pragmatic alternative I would suggest the following: Use one or
more datasets as 'calibration', exploring a range of filter sizes to
find a good trade-off, as Karl says, between sensitivity and spatial
acuity. Choose one filter size, set the calibration data aside, and
then apply that filter to all other similar data (same voxel sizes,
TR, TE, etc).
This requires sacrificing at least one representative dataset, but you
avoid the fishing problem.
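[A hypothetical sketch of the calibration step: smooth one held-out dataset at several kernel widths and record a simple sensitivity proxy, here the peak t value of a two-sample comparison between task and rest scans. The function name, the toy data, and the choice of metric are all illustrative assumptions, not anything from the original posts.]

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def peak_t(task, rest, fwhm_vox):
    """Peak two-sample t statistic after smoothing each scan
    with an isotropic Gaussian of the given FWHM (in voxels)."""
    sigma = fwhm_vox / np.sqrt(8.0 * np.log(2.0))
    task_s = np.stack([gaussian_filter(v, sigma) for v in task])
    rest_s = np.stack([gaussian_filter(v, sigma) for v in rest])
    diff = task_s.mean(axis=0) - rest_s.mean(axis=0)
    se = np.sqrt(task_s.var(axis=0, ddof=1) / len(task_s)
                 + rest_s.var(axis=0, ddof=1) / len(rest_s))
    return float(np.max(diff / np.maximum(se, 1e-12)))

# Toy calibration data: 10 task and 10 rest scans, small grid,
# with a uniform offset standing in for "activation".
rng = np.random.default_rng(1)
task = rng.standard_normal((10, 16, 16, 8)) + 0.5
rest = rng.standard_normal((10, 16, 16, 8))

for fwhm in (1.0, 2.0, 3.0):
    print(f"FWHM {fwhm} vox: peak t = {peak_t(task, rest, fwhm):.2f}")
```

The point is only the shape of the procedure: run it once on the calibration data, pick the filter size, and then never revisit that choice for the remaining datasets.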
Hope this helps.
-Tom
Worsley, K.J., Marrett, S., Neelin, P., and Evans, A.C. (1996).
Searching scale space for activation in PET images. Human Brain
Mapping, 4:74-90.

Poline, J.-B., and Mazoyer, B.J. (1994). Enhanced detection in brain
activation maps using a multifiltering approach. J. Cereb. Blood Flow
Metab., 14:639-642.