Hi there,
I am puzzled by this and am hoping someone here might understand it:
When I run single-session MELODIC ICA on unsmoothed 20-min datasets (previously motion corrected, slice-time corrected, BET brain extracted, and detrended), the automatic dimensionality estimation gives me fewer than 100 components. However, if I smooth the data, either before MELODIC or as part of the pre-stats options in the GUI, I get 400-600 components. I have done the smoothing with both FSL and AFNI and get the same result. I'm also noticing that MELODIC runs much faster on the unsmoothed data than on the smoothed data.
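To illustrate the direction of the effect with a toy example (this uses a naive eigenvalue-threshold estimate on pure-noise data, not MELODIC's actual Laplace-approximation PCA estimator, so it is only a sketch): spatially smoothing the voxels reduces the number of effectively independent samples, which widens the eigenvalue spectrum of the time-by-time covariance and pushes more eigenvalues past a noise threshold calibrated for unsmoothed data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
T, V = 100, 2000                       # timepoints x voxels
X = rng.standard_normal((T, V))        # pure noise: true dimensionality is 0

def n_components(data):
    # Normalize to unit overall variance so smoothing's variance
    # reduction doesn't confound the comparison.
    data = data / data.std()
    evals = np.linalg.eigvalsh(data @ data.T / data.shape[1])
    # Marchenko-Pastur upper edge for iid noise of this shape; a naive
    # estimator counts eigenvalues above this as "signal" components.
    edge = (1 + np.sqrt(T / V)) ** 2
    return int((evals > edge).sum())

n_raw = n_components(X)                               # close to 0
X_sm = gaussian_filter1d(X, sigma=3, axis=1)          # smooth across voxels
n_sm = n_components(X_sm)                             # inflated count
print(n_raw, n_sm)
```

Under this toy model the smoothed data always yields more above-threshold eigenvalues, because the smoothing-induced spatial correlation violates the iid-voxel assumption behind the noise threshold.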
Can anyone see a logical reason for these observations?
Thanks!
Kajsa
Princeton University