Dear Charles-Henri,
There are many things that could lead to a lack of significance in your
analysis versus significance in other papers, including the number of
subjects, the experimental design, etc.
However, two aspects of segmentation might indeed help. Firstly,
assuming that you are normalising different subjects' SPECT images,
you might find that unified segmentation's normalisation is more
precise than the classic normalisation to the SPECT template, and
improved registration should improve your stats.
Secondly, the corrected p-values account for the search volume: the
more voxels (or resels) you analyse, the more severe the multiple
comparisons correction, which means that analysing just the voxels
within a GM mask could help a lot.
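To see why the search volume matters, here is a rough sketch. SPM's FWE correction actually uses random field theory rather than Bonferroni, and the voxel counts below are invented for illustration, but simple Bonferroni scaling shows the same principle: the corrected p-value grows with the number of tests, so a smaller mask means a less severe correction.

```python
def bonferroni(p_uncorrected, n_voxels):
    """Bonferroni-corrected p-value, capped at 1."""
    return min(1.0, p_uncorrected * n_voxels)

# Hypothetical peak-voxel p-value and hypothetical search volumes
p = 1e-7
whole_brain = bonferroni(p, 200000)  # e.g. a whole-brain analysis
gm_only = bonferroni(p, 70000)       # e.g. only voxels inside a GM mask
# The same peak survives correction more easily in the smaller volume.
```

With these made-up numbers the corrected value drops from 0.02 (whole brain) to 0.007 (GM mask); the real gain in SPM depends on the resel count, but the direction of the effect is the same.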
So, the images entered into the analysis stay the same as they are now,
but you would specify an "explicit mask" image, which limits the
analysis to just the GM voxels. One way to create such a mask is to
average the normalised GM segments and threshold the average in some
way to get a binary mask. I wrote a toolbox that can automatically
pick a good (in most cases!) threshold for binarisation (and also
help create the average):
http://www.fil.ion.ucl.ac.uk/spm/ext/#Masking
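For concreteness, the average-and-threshold idea looks like this. The sketch assumes the normalised GM segments have already been loaded as same-sized arrays (in practice they are NIfTI/Analyze images, read e.g. with spm_read_vols in MATLAB), and the 0.25 threshold is an arbitrary illustrative choice; the Masking toolbox above picks a threshold automatically instead.

```python
import numpy as np

def make_gm_mask(gm_segments, threshold=0.25):
    """Average GM probability images across subjects and binarise.

    gm_segments: sequence of arrays, one normalised GM segment per
    subject, all with identical dimensions. threshold is illustrative.
    """
    mean_gm = np.mean(gm_segments, axis=0)          # voxel-wise average
    return (mean_gm > threshold).astype(np.uint8)   # binary explicit mask

# Toy example: three subjects' 2x2 "GM probability images"
segs = [np.array([[0.9, 0.1], [0.6, 0.0]]),
        np.array([[0.8, 0.2], [0.5, 0.1]]),
        np.array([[0.7, 0.0], [0.4, 0.2]])]
mask = make_gm_mask(segs)  # 1 where mean GM probability exceeds 0.25
```

The resulting mask image would then be written out and entered in the "explicit mask" field of the second-level design specification.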
Hope that helps,
Ged
On 6 May 2010 14:36, Charles-Henri Malbert
<[log in to unmask]> wrote:
> Dear SPM expert,
>
> I am performing a paired t-test analysis using SPECT images. Unfortunately, I get quite large corrected p-values (i.e. 0.9 or so) while getting small uncorrected p-values (0.001), so it looks like a type II error. Bayesian estimation makes things a little better, but only marginally.
> When looking at several papers with SPECT, authors get a corrected p-value around 0.01 with images that are about the same as mine (it is SPECT, after all). However, they performed grey matter segmentation. Did this segmentation improve the stats so significantly? Furthermore, what are the actual images to insert in the analysis (grey matter segments, white matter, ...)?
>
> Thanks in advance
>
> Charles-Henri
>