Hello

I have a theoretical question about the logic of the Featquery ROI analysis.

As I understand it, the way Featquery currently runs, when used with a mask, is the following:

(1) Take the output statistics from the univariate first-level FEAT GLM analysis
(2) For each voxel in the mask, extract the desired statistic (e.g. COPE, % signal change, z-score, etc.)
(3) Apply a user-specified measure for aggregation of the spatial statistics (e.g. mean, median, max, or 90th percentile
 of the above)
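If I understand the current flow correctly, it can be sketched as follows (a minimal numpy illustration with synthetic data, not the actual Featquery implementation; the array names are hypothetical):

```python
import numpy as np

# Hypothetical example: a 3D stat map from the first-level GLM
# (e.g. % signal change per voxel) and a binary ROI mask of the same shape.
rng = np.random.default_rng(0)
stat_map = rng.normal(loc=0.5, scale=0.2, size=(4, 4, 4))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True

# Step 2: extract the per-voxel statistic inside the mask
roi_values = stat_map[mask]

# Step 3: user-chosen spatial aggregation of those statistics
summary = {
    "mean": roi_values.mean(),
    "median": np.median(roi_values),
    "max": roi_values.max(),
    "p90": np.percentile(roi_values, 90),
}
print(summary)
```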

Would it not make more sense to do:

(1) Apply the ROI mask to the 4D timeseries, keeping only the voxels in the ROI
(2) Apply a user-specified measure for aggregation across space to generate a single timeseries
(3) Run FEAT on this single timeseries to produce the desired statistic (e.g. COPE, % signal change, z-score, etc.)
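The proposed reorder might look like this (again a toy numpy sketch with synthetic data; a plain least-squares fit stands in for the full FEAT GLM, and all names are illustrative):

```python
import numpy as np

# Hypothetical ROI data: 50 voxels x 100 timepoints, a shared task signal
# plus independent voxel-wise noise.
rng = np.random.default_rng(1)
n_vox, n_t = 50, 100
design = np.column_stack([np.ones(n_t), np.sin(np.linspace(0, 6 * np.pi, n_t))])
true_beta = np.array([1.0, 0.8])
ts = design @ true_beta + rng.normal(scale=0.3, size=(n_vox, n_t))

# Step 2: spatial aggregation across ROI voxels into one timeseries
roi_ts = ts.mean(axis=0)

# Step 3: ordinary least squares as a stand-in for the FEAT GLM
beta, *_ = np.linalg.lstsq(design, roi_ts, rcond=None)
print(beta)  # estimated [baseline, task effect]
```

Averaging across voxels before fitting suppresses independent voxel noise, so the GLM runs on a much cleaner single timeseries.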

One advantage of this reordering is that the choice of spatial aggregation statistic in the new step 2 seems far
more robust and less arbitrary than in the current Featquery. For aggregating timeseries in step 2, it probably
only makes sense to use the mean, the median, or the first principal component from a PCA (and these would
presumably give rather similar results). Conversely, in the current Featquery a wider range of spatial statistics can
be argued as justifiable by the user (say, mean vs. max), and these can produce markedly different results.
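The divergence between defensible choices in the current scheme is easy to see with synthetic ROI values (a toy example; the distribution parameters are arbitrary):

```python
import numpy as np

# 200 simulated per-voxel statistics from one ROI
rng = np.random.default_rng(2)
roi_values = rng.normal(loc=0.3, scale=0.5, size=200)

# Two "justifiable" spatial summaries of the same ROI
mean_stat = roi_values.mean()
max_stat = roi_values.max()

# The max is pulled up by the ROI's single most extreme voxel,
# so the two summaries can differ substantially.
print(mean_stat, max_stat)
```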

many thanks

Daniel

--
Daniel Irlam Ph.D