Hi Jesper,

excellent explanation - as always!

But I think Javier only has a group z-stat image or the like, no 4D data.

Cheers-
Andreas
________________________________
From: FSL - FMRIB's Software Library [[log in to unmask]] on behalf of Jesper Andersson [[log in to unmask]]
Sent: Friday, 16 April 2010 19:48
To: [log in to unmask]
Subject: Re: [FSL] Corrected P - Number of resels

Dear Javier,


> I am trying to find the best way to apply a corrected P to my FEAT (fMRI) results without being too conservative. I am working with a mask that contains about 1000 voxels. If I did a Bonferroni correction here, I'd have a corrected P of 0.05/1000. As has been extensively reported and explained in the various tutorials, this is too conservative for smoothed brain images, and the number of independent comparisons is considered equivalent to the number of Resels (rather than the number of voxels). In my case, the mask has about 9 Resels.
>
> Would it be correct to use a P = (0.05/9) in my high-level FEAT analyses? If so, would the right way to do it be adding a pre-threshold mask and setting an "uncorrected" P = 0.0056 in the Post-stats tab?

You are right that doing a Bonferroni correction based on 1000 voxels would be much too conservative.

On the other hand, a correction based on 9 Resels would not be conservative enough in this instance. I would recommend going to

http://imaging.mrc-cbu.cam.ac.uk/imaging/PrinciplesRandomFields

which is an excellent explanation written by Matthew Brett. In particular you may want to look at

http://imaging.mrc-cbu.cam.ac.uk/imaging/PrinciplesRandomFields#head-504893e8afe62f1e3e8aaf3cb368a1d389261ef5

which compares a Resel-Bonferroni correction with "proper" RFT maths.
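To put rough numbers on that comparison: the little Python sketch below is nothing FSL-specific, just back-of-the-envelope arithmetic. It uses the figures from your question (1000 voxels, about 9 Resels, alpha = 0.05) and assumes a stationary 3D Gaussian field, using the expected Euler characteristic formula of Worsley et al. (1992).

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

alpha, n_voxels, n_resels = 0.05, 1000, 9

# 1) Bonferroni over all voxels: p = 0.05/1000 per voxel.
z_bonferroni = norm.isf(alpha / n_voxels)      # roughly 3.9

# 2) "Resel-Bonferroni": the p = 0.05/9 (about 0.0056) from the question.
z_resel_bonf = norm.isf(alpha / n_resels)      # roughly 2.5

# 3) RFT: expected Euler characteristic of a thresholded 3D Gaussian field,
#    E[EC](z) = R * (4 ln 2)^(3/2) * (2 pi)^(-2) * (z^2 - 1) * exp(-z^2 / 2);
#    solve E[EC](z) = alpha for z.
def expected_ec(z, resels=n_resels):
    return (resels * (4 * np.log(2)) ** 1.5 * (2 * np.pi) ** -2
            * (z ** 2 - 1) * np.exp(-z ** 2 / 2))

z_rft = brentq(lambda z: expected_ec(z) - alpha, 1.5, 6.0)

print(f"Bonferroni over voxels : z > {z_bonferroni:.2f}")
print(f"Bonferroni over Resels : z > {z_resel_bonf:.2f}")
print(f"RFT, 9 Resels in 3D    : z > {z_rft:.2f}")

With these numbers the Resel-Bonferroni threshold comes out clearly below the RFT one (which in turn sits below the voxel-wise Bonferroni threshold), which is exactly the sense in which it is not conservative enough here.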

The Resel-Bonferroni approximation becomes especially lenient when applied to relatively small regions such as yours. The reason is that it is not only the volume of your search region (of which the number of Resels is an index) that matters, but also its surface area. For a given p-value and a given volume, the threshold should get higher the larger the surface area.

An intuitive explanation for that is as follows. Imagine a smooth Gaussian random field (under the null hypothesis) that is all around you. Some values will be small, some will be large, and on average they will be zero. Now imagine that performing an experiment (under the null hypothesis) is like sampling a volume somewhere in this field. The larger that volume is, the greater the chance/risk of finding a relatively high value by chance alone. And that is of course the reason why you have to use a higher threshold when you have a larger volume (cf. Bonferroni).

Imagine now that your search volume is in the shape of a small lump of (pizza) dough. You can put that into your random field and see what values you would observe by chance. Since it is a small lump, chances are quite good that we won't happen upon any large values, and hence we can get away with a reasonably low threshold for a given p-value. Let us now say we start flattening this piece of dough until it looks like a family-size pizza, and that we start sampling the random field with this. There is now a much greater risk/chance that we happen to cut through an area with large values, and therefore we have to use a higher threshold to protect against these false positives.
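If you would like to see that effect in a toy simulation (nothing FSL-specific: smoothed white noise stands in for the random field, and the grid size, smoothness and the two equal-volume masks are arbitrary choices for illustration), something along these lines will do:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
shape, sigma = (64, 64, 64), 3.0            # arbitrary grid and smoothing (voxels)

lump = np.zeros(shape, dtype=bool)
lump[24:40, 24:40, 24:40] = True            # 16 x 16 x 16 = 4096 voxels, compact
pizza = np.zeros(shape, dtype=bool)
pizza[:, :, 32] = True                      # 64 x 64 x 1  = 4096 voxels, flat

max_lump, max_pizza = [], []
for _ in range(100):                        # 100 null "experiments"
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()                    # re-standardise after smoothing
    max_lump.append(field[lump].max())
    max_pizza.append(field[pizza].max())

print("average maximum in the lump :", round(float(np.mean(max_lump)), 2))
print("average maximum in the pizza:", round(float(np.mean(max_pizza)), 2))

The flat "pizza" region reliably turns up larger maxima under the null than the equally sized "lump", so it needs a higher threshold for the same false-positive rate.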

I would therefore recommend using randomise (http://www.fmrib.ox.ac.uk/fsl/randomise/index.html) instead. If you supply it with a mask, it will consider only those voxels that are part of your prior hypothesis, and hence base the correction only on those. The mask can consist of a set of disjoint regions that together define the areas where you expect to see something for a given contrast.
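Just as a sketch of what that could look like for a one-sample group analysis (the file names are made up, the call is wrapped in Python purely for the example, the flags are the standard randomise options as I understand them, and it assumes you have the lower-level 4D data available rather than only a group z-stat image):

import subprocess

# Illustration only: all file names are hypothetical.
subprocess.run(
    [
        "randomise",
        "-i", "all_copes_4D.nii.gz",      # lower-level COPEs merged into one 4D file
        "-o", "grp_roi",                  # output root name
        "-m", "hypothesis_mask.nii.gz",   # mask of the region(s) in your prior hypothesis
        "-1",                             # one-sample (group mean) test
        "-n", "5000",                     # number of permutations
        "-x",                             # voxel-wise corrected p-value images
    ],
    check=True,
)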

Good luck,
Jesper