Jess,

I believe the non-zero voxel issue comes from the fact that, for the
combined runs, the images have been put into standard space, so you should
get the same number of voxels per subject per area when you look at the
non-thresholded z-stat image. Then again, it may simply be because you are
analysing the z-stat image itself: you are taking a mask from the atlas,
which has a fixed number of voxels, and overlaying it on a portion of the
image that is non-zero, so every time you do this you will get the same
number of voxels after masking.
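
To illustrate the masking point, here is a rough sketch (not what featquery
actually runs; the file and ROI names are made up) of why an atlas ROI
applied to a standard-space stat image gives the same count for everyone:
the ROI grid is fixed, so the count depends only on the mask.

    # Rough sketch only; paths/filenames below are hypothetical.
    import nibabel as nib
    import numpy as np

    # Atlas ROI mask in standard (e.g. MNI152) space -- same grid for all subjects.
    roi = nib.load("harvardoxford_roi_mask.nii.gz").get_fdata() > 0

    for subj in ["sub01", "sub02"]:
        # The higher-level (gfeat) zstat is already resampled to that same grid.
        zstat = nib.load(f"{subj}.gfeat/cope1.feat/stats/zstat1.nii.gz").get_fdata()
        # The unthresholded zstat is non-zero essentially everywhere inside the
        # brain, so the non-zero count inside the ROI is just the ROI size,
        # hence identical across subjects.
        print(subj, np.count_nonzero(zstat[roi]))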

You get values for mean % signal change on the individual runs because there
are many timepoints (TRs) for each voxel, so featquery can calculate it. The
averaged analysis has done just that: it has averaged the results across
runs, leaving you with only 2 "time-points."
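
On the % signal change point, here is a toy example (plain numpy, not the
real featquery calculation) of one reason the per-run values need not
average to the gfeat value: the per-run reports are means over different
numbers of voxels in each run's own space, and a mean of those per-run
means is not the same as a single mean over the common standard-space mask.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical per-voxel % signal change values; the two runs cover
    # different numbers of non-zero voxels in their native spaces.
    run1 = rng.normal(0.5, 0.2, size=180)
    run2 = rng.normal(0.6, 0.2, size=240)

    mean_of_run_means = (run1.mean() + run2.mean()) / 2
    pooled_mean = np.concatenate([run1, run2]).mean()
    print(mean_of_run_means, pooled_mean)  # generally not equal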

Hope this helps,

Carlos 

>I have been carrying out some featquery analyses on some data. I have 
>noticed that when I run featquery on data averaged over two runs of a task 
>(each participant's gfeat), the number of non-zero voxels in the output 
>report seems to be exactly the same for all participants for the PE, COPE, 
>Tstat and Zstat. Very rarely there is a participant whose number of non-zero 
>voxels differs from all the rest; it only differs between participants for 
>the thresholded z-stat. However, when I run featquery on each of the 
>individual runs of the task, I get values for mean % signal change that do 
>not average to give the means I get when I run the analyses on the gfeats, 
>and the number of non-zero voxels differs quite a lot between participants 
>for the PE, COPE, Tstat and Zstat. 
>
>I wondered if anyone could explain why this is and if I am doing something 
>wrong. 
>
>Many many thanks
>
>Jess