
Dear Donald,

Currently I have only implemented two-factor designs (one mixed design and another where both factors are between-subjects). I'll be working on more later, but it can get quite tricky to write an algorithm that not only works but is fast enough to allow for a large number of permutations.

At the moment my scripts for permutation of complex designs are based on the attached article by Anderson and ter Braak (2003). There are many strategies that can be used, such as simple permutation of the raw data; permutation of the residual terms (under reduced and full models); as well as permutation of only one factor (thus providing exact control of the other terms in the model).
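To make one of those strategies concrete, here is a minimal Python sketch of permutation of residuals under the reduced model (the Freedman-Lane scheme compared by Anderson and ter Braak, 2003). This is purely illustrative: the function and variable names are mine, not the scripts discussed in this thread.

```python
import numpy as np

def perm_pvalue_reduced_model(y, x, nuisance, n_perm=1000,
                              rng=np.random.default_rng(0)):
    """Freedman-Lane style permutation test for the effect of x on y,
    controlling for a nuisance covariate.

    Fit the reduced model (nuisance only), permute its residuals,
    add them back to the reduced-model fit, and refit the full model
    to build the null distribution of the coefficient of x.
    """
    y, x, nuisance = (np.asarray(a, dtype=float) for a in (y, x, nuisance))
    Z = np.column_stack([np.ones_like(nuisance), nuisance])  # reduced design
    beta_z, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted = Z @ beta_z
    resid = y - fitted

    def stat(yy):
        X = np.column_stack([np.ones_like(x), nuisance, x])  # full design
        b, *_ = np.linalg.lstsq(X, yy, rcond=None)
        return abs(b[-1])                                    # effect of x

    t_obs = stat(y)
    null = np.array([stat(fitted + rng.permutation(resid))
                     for _ in range(n_perm)])
    # add-one correction so the p-value is never exactly zero
    return (1 + np.sum(null >= t_obs)) / (n_perm + 1)
```

The same skeleton works for the other strategies in the paper: only the choice of what gets permuted (raw data, full-model residuals, or one factor's labels) changes.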

I find the paper quite good; it includes simulations testing each strategy's performance.

I'm far from releasing the scripts as stable, but I have analysed multiple datasets using TFCE and permutation for complex designs, and the results are consistently impressive (high specificity and sensitivity).

Kind regards,
Armand


On 15 February 2012 16:57, MCLAREN, Donald <[log in to unmask]> wrote:


On Wed, Feb 15, 2012 at 4:33 AM, Armand Mensen <[log in to unmask]> wrote:

Dear Donald,

The method requires no thresholds of any kind; it works directly on the input data by combining information from a large number of thresholds (say 50), from 0 up to the maximum statistic in the image. The information from each threshold is then weighted using default parameters (which shouldn't be changed).
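For intuition, the threshold integration described above can be sketched for a 1-D statistic map. This is an illustrative Python sketch of the TFCE formula from Smith & Nichols (2009), not the scripts mentioned in this thread; E=0.5 and H=2 are the published default weights.

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Approximate TFCE for a 1-D map of non-negative statistics.

    Step through heights h from 0 to max(stat) in increments dh; at
    each height, find the contiguous clusters exceeding h, and add
    extent(h)^E * h^H * dh to every point inside each cluster.
    """
    stat = np.asarray(stat, dtype=float)
    out = np.zeros_like(stat)
    n = len(stat)
    for h in np.arange(dh, stat.max() + dh, dh):
        above = stat >= h
        i = 0
        while i < n:
            if above[i]:
                j = i
                while j < n and above[j]:   # walk to end of this cluster
                    j += 1
                extent = j - i
                out[i:j] += (extent ** E) * (h ** H) * dh
                i = j
            else:
                i += 1
    return out
```

The per-point TFCE scores are then compared against a permutation-derived null distribution, exactly as one would compare raw statistics.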


That makes sense, I misinterpreted the figure in the paper.

 

It should work hassle-free for single-factor designs of any kind, since the permutation strategy is clear. However, more complex designs require some more tinkering, because it's not always clear how to go about exchanging the labels.

I do have some working scripts for two-factor designs that test interaction and main effects simultaneously by permuting the ANOVA results.
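As a rough illustration of testing both main effects and the interaction under permutation, here is a Python sketch using the simplest of the strategies from the Anderson and ter Braak paper (permutation of the raw data) on a balanced two-way layout. The function names and the balanced-design assumption are mine, not taken from the scripts discussed here.

```python
import numpy as np

def twoway_F(data):
    """F statistics for factor A, factor B, and A x B interaction.
    data has shape (a, b, n): a levels of A, b levels of B, n per cell."""
    a, b, n = data.shape
    grand = data.mean()
    mA = data.mean(axis=(1, 2))
    mB = data.mean(axis=(0, 2))
    mAB = data.mean(axis=2)
    ssA = b * n * np.sum((mA - grand) ** 2)
    ssB = a * n * np.sum((mB - grand) ** 2)
    ssAB = n * np.sum((mAB - mA[:, None] - mB[None, :] + grand) ** 2)
    ssE = np.sum((data - mAB[:, :, None]) ** 2)
    msE = ssE / (a * b * (n - 1))
    return (ssA / (a - 1) / msE,
            ssB / (b - 1) / msE,
            ssAB / ((a - 1) * (b - 1)) / msE)

def perm_anova(data, n_perm=500, rng=np.random.default_rng(0)):
    """Permute raw observations across all cells and recompute the
    three F statistics each time, giving one p-value per term."""
    F_obs = np.array(twoway_F(data))
    flat = data.ravel()
    exceed = np.zeros(3)
    for _ in range(n_perm):
        perm = rng.permutation(flat).reshape(data.shape)
        exceed += np.array(twoway_F(perm)) >= F_obs
    return (exceed + 1) / (n_perm + 1)
```

Note that raw-data permutation is not exact for every term (that is precisely what the reduced-model and restricted-permutation strategies in the paper address), but it shows the overall shape of such a script.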

When you mention two factors, do you mean two between-subjects factors? If you are using a between-subjects and a within-subjects factor, how are you creating and using the two error terms? Also, have you implemented any variance correction for repeated-measures designs?

 

Hope this clarifies some things.

Armand

On Feb 14, 2012 4:55 PM, "MCLAREN, Donald" <[log in to unmask]> wrote:
Thanks for the link.

However, it seems that this only works for between-subject designs and not within-subject designs (or perhaps only for within-subject designs with a single factor), as it requires permutations.

Is there a way to get it to work for repeated-measures designs?

Also, while it is labelled threshold-free, it still seems to require a voxel-wise threshold. Is this the case, or am I misinterpreting the method?

Best Regards, Donald McLaren
=================
D.G. McLaren, Ph.D.
Postdoctoral Research Fellow, GRECC, Bedford VA
Research Fellow, Department of Neurology, Massachusetts General Hospital and
Harvard Medical School
Office: (773) 406-2464
=====================
This e-mail contains CONFIDENTIAL INFORMATION which may contain PROTECTED
HEALTHCARE INFORMATION and may also be LEGALLY PRIVILEGED and which is
intended only for the use of the individual or entity named above. If the
reader of the e-mail is not the intended recipient or the employee or agent
responsible for delivering it to the intended recipient, you are hereby
notified that you are in possession of confidential and privileged
information. Any unauthorized use, disclosure, copying or the taking of any
action in reliance on the contents of this information is strictly
prohibited and may be unlawful. If you have received this e-mail
unintentionally, please immediately notify the sender via telephone at (773)
406-2464 or email.



On Tue, Feb 14, 2012 at 10:10 AM, Koutsouleris, Nikolaos <[log in to unmask]> wrote:
Dear all,
 
You could try out Christian Gaser's TFCE toolbox, available at http://dbm.neuro.uni-jena.de/tfce;
it can be used to apply the TFCE approach to already-estimated SPM experiments.
Good luck!
 
Nikos Koutsouleris
NeuroImaging Lab
Department of Psychiatry and Psychotherapy
Ludwig-Maximilian-University
Nussbaumstr. 7
80336 Munich
 


From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On behalf of Armand Mensen
Sent: Tuesday, 14 February 2012 15:57
To: [log in to unmask]
Subject: Re: [SPM] Using multiple cluster-defining thresholds in the same study

Hello,

Jonathan beat me to it, but there is a good discussion of the use of multiple cluster-forming thresholds in the Smith & Nichols (2009) paper (and, regarding non-stationarity, in PMID: 20426085).

I have been working with TFCE analysis for EEG datasets and would really recommend trying it out in all cases (but especially if you are concerned with the use of multiple thresholds).

As mentioned, it is implemented in FSL, but I'm sure someone has made some basic scripts for Matlab/SPM by now (I have some basic working scripts, adapted for EEG datasets, in case no one else can help).

Good luck,
Armand
 

On 13 February 2012 22:44, Bob Spunt <[log in to unmask]> wrote:
SPM experts,

I have been using SPM's cluster-level corrected statistics, and I am curious about using multiple cluster-defining (i.e., voxel-level, uncorrected) thresholds in the same study. To make this somewhat concrete, assume I have three conditions: A, B, C. In the first pass, I choose to use the common voxel-level (uncorrected) threshold of p<.001 to define clusters. In A>B, this reveals several clusters that survive correction. However, in A>C it reveals similar clusters but which in this case do not survive correction. Now, let's say that if I drop the voxel-level threshold to p<.01, the clusters emerging in A>C now survive correction at the cluster-level. What are the issues with this procedure? 

From my relatively naive point of view, the only major issue I see is that as you liberalize the cluster-defining threshold, the extent of the observed clusters will increase with a corresponding decrease in confidence in anatomical localization. (In the most absurd case, one can use a cluster-defining threshold of p<1 and will observe one massive cluster - the whole-brain - that survives correction.)

If an investigator is completely transparent regarding their procedures and findings, that is, they fully report the cluster-defining thresholds used in each analysis and details regarding the anatomical extent of the resulting clusters, is there any issue with this procedure? 

Thanks in advance for any tips. 

Cheers,
Bob

-------------------------------------------------------------------------------
Bob Spunt
Postdoctoral Fellow
Social Cognitive and Affective Neuroscience Labs
Department of Psychology
University of California, Los Angeles