On Tue, 14 Feb 2012 16:10:47 +0100, Koutsouleris, Nikolaos <[log in to unmask]> wrote:
>Dear all,
>
>you could try out Christian Gaser's TFCE toolbox available at http://dbm.neuro.uni-jena.de/tfce
>it can be used to apply the TFCE approach to already estimated SPM experiments.
Please keep in mind that this is still a beta version: I have not yet tested all possible designs, and the whitening correction is not yet considered. However, it should work for most designs. I have also implemented multi-threading using OpenMP to save computation time, but this sometimes causes problems on Windows machines and can be deselected.
You simply need an existing 2nd-level analysis to run TFCE. It is perhaps also worth mentioning that the TFCE statistic is somewhat robust against non-stationary smoothness, an issue often raised for VBM.
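For those new to the method: TFCE replaces a single cluster-forming threshold with an integral over all thresholds, TFCE(p) = sum over heights h of e(h)^E * h^H * dh, where e(h) is the extent of the cluster containing p at height h, with defaults E=0.5 and H=2 from the Smith & Nichols (2009) paper. A rough pure-Python sketch for a 1-D map (my own illustration of the idea, not code from the toolbox):

```python
def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Per-position TFCE score for a 1-D statistic map (illustration only)."""
    out = [0.0] * len(stat)
    nsteps = int(round(max(stat) / dh))   # discretize heights 0..max in steps of dh
    for step in range(1, nsteps + 1):
        h = step * dh
        i = 0
        while i < len(stat):
            if stat[i] >= h:
                # walk to the end of this suprathreshold run (the cluster at height h)
                j = i
                while j < len(stat) and stat[j] >= h:
                    j += 1
                # each position in the cluster gains extent^E * height^H * dh
                contrib = ((j - i) ** E) * (h ** H) * dh
                for k in range(i, j):
                    out[k] += contrib
                i = j
            else:
                i += 1
    return out
```

The toolbox and FSL of course operate on 3-D images with proper voxel connectivity and permutation-based inference; the sketch only shows how each voxel's score accumulates support across all thresholds instead of depending on one arbitrary cutoff.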
Here are the two key papers about TFCE:
http://dx.doi.org/10.1016/j.neuroimage.2010.09.088
http://dx.doi.org/10.1016/j.neuroimage.2008.03.061
Regards,
Christian
>Good luck!
>
>Nikos Koutsouleris
>NeuroImaging Lab
>Department of Psychiatry and Psychotherapy
>Ludwig-Maximilian-University
>Nussbaumstr. 7
>80336 Munich
>
>
>________________________________
>From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On behalf of Armand Mensen
>Sent: Tuesday, 14 February 2012 15:57
>To: [log in to unmask]
>Subject: Re: [SPM] Using multiple cluster-defining thresholds in the same study
>
>Hello,
>
>Jonathan beat me to it but there is a good discussion on the use of multiple cluster forming thresholds in the Smith & Nichols (2009) paper (and further in PMID: 20426085 in terms of non-stationarity).
>
>I have been working with TFCE analysis for EEG datasets and would really recommend trying it out in all cases (but especially if you are concerned with the use of multiple thresholds).
>
>As mentioned, it is implemented in FSL, but I'm sure someone has written basic Matlab/SPM scripts by now (I have some basic working scripts, adapted for EEG datasets, in case no one else can help).
>
>Good luck,
>Armand
>
>
>On 13 February 2012 22:44, Bob Spunt <[log in to unmask]> wrote:
>SPM experts,
>
>I have been using SPM's cluster-level corrected statistics, and I am curious about using multiple cluster-defining (i.e., voxel-level, uncorrected) thresholds in the same study. To make this somewhat concrete, assume I have three conditions: A, B, C. In the first pass, I use the common voxel-level (uncorrected) threshold of p<.001 to define clusters. In A>B, this reveals several clusters that survive correction. However, A>C reveals similar clusters that, in this case, do not survive correction. Now, suppose that if I lower the voxel-level threshold to p<.01, the clusters emerging in A>C do survive cluster-level correction. What are the issues with this procedure?
>
>From my relatively naive point of view, the only major issue I see is that as you liberalize the cluster-defining threshold, the extent of the observed clusters will increase, with a corresponding decrease in confidence in anatomical localization. (In the most absurd case, one could use a cluster-defining threshold of p<1 and observe one massive cluster, the whole brain, that survives correction.)
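As an aside for readers following this point, the localization trade-off is easy to demonstrate with a toy 1-D example (hypothetical numbers, plain Python, not SPM code): lowering the cluster-forming threshold can merge two distinct clusters into one large, poorly localized blob.

```python
# Toy illustration: cluster extents in a 1-D statistic map at two
# cluster-forming thresholds (hypothetical z-values, not SPM code).

def clusters(stat, thresh):
    """Return (start, end) index pairs of suprathreshold runs."""
    runs, i = [], 0
    while i < len(stat):
        if stat[i] >= thresh:
            j = i
            while j < len(stat) and stat[j] >= thresh:
                j += 1
            runs.append((i, j))
            i = j
        else:
            i += 1
    return runs

stat = [1.5, 3.4, 3.6, 2.5, 3.5, 3.3, 1.2]   # z-values along one dimension
strict = clusters(stat, 3.1)    # ~ p<.001 for a z-map: two small clusters
lenient = clusters(stat, 2.3)   # ~ p<.01: the bridge voxel merges them
```

At the strict threshold the map yields two clusters of extent 2 each; at the lenient one, a single cluster of extent 5, which is larger but says less about where the effect actually sits.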
>
>If an investigator is completely transparent regarding their procedures and findings, that is, they fully report the cluster-defining thresholds used in each analysis and details regarding the anatomical extent of the resulting clusters, is there any issue with this procedure?
>
>Thanks in advance for any tips.
>
>Cheers,
>Bob
>
>-------------------------------------------------------------------------------
>Bob Spunt
>Postdoctoral Fellow
>Social Cognitive and Affective Neuroscience Labs
>Department of Psychology
>University of California, Los Angeles
>
>
>