Hi Simon,

Yes, others will disagree... Mean-centering each condition gives you data under H0.
That's the whole point of the approach: the variance must be left intact. Then you resample.
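To make that recipe concrete, here is a toy numerical sketch (my own illustration, not SPM code; the paired design, the `max_t` helper, and all numbers are made up): mean-center each condition per voxel to build H0 data with the variance intact, bootstrap-resample subjects, and take the percentile of the observed maximum statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n subjects x v voxels per condition, two conditions.
n, v = 20, 50
a = rng.normal(0.3, 1.0, (n, v))   # condition A (small true effect)
b = rng.normal(0.0, 1.0, (n, v))   # condition B

def max_t(x, y):
    """Maximum paired-t statistic over voxels (the 'max statistic')."""
    d = x - y
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
    return t.max()

t_obs = max_t(a, b)

# H0 data: mean-center each condition per voxel -- this removes the
# condition effect but leaves the variance intact.
a0, b0 = a - a.mean(0), b - b.mean(0)

# Bootstrap: resample subjects with replacement from the H0 data
# (same indices for both conditions, preserving the pairing) and
# record the max statistic each time.
B = 500
null_max = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)
    null_max[i] = max_t(a0[idx], b0[idx])

# FWE-corrected p-value by the percentile method.
p_fwe = (1 + np.sum(null_max >= t_obs)) / (B + 1)
```

Because the p-value comes from the distribution of the maximum over all voxels, it is familywise-corrected across the mask by construction.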

From the SPM perspective, the issue is indeed in the variance estimation, where only voxels surviving an F-test over all columns (spm_spm) are used.

Cyril

PS: I agree that if RFT, cluster FDR, or Bayes gives nothing, any other technique revealing something would not be very trustworthy... (but it's different for Bruno)

Dr Cyril Pernet
BRIC / SINAPSE
University of Edinburgh

----- Reply message -----
From: "Eickhoff, Simon" <[log in to unmask]>
Date: Wed, May 16, 2012 04:02
Subject: [SPM] AW: [SPM] AW: [SPM] AW: [SPM] Estimating a second-level model under the null hypothesis
To: <[log in to unmask]>

Dear Bruno


This may be just my feeling, and others on the mailing list may disagree, but I would advise sticking to the classical approaches (FWE, cluster-level FDR) or using established alternatives (PPMs, permutation). Apart from methodological problems with the bootstrap, you should also keep another perspective in mind: if these conventional approaches did not yield any significance, how much would you trust results gained from the bootstrap? If neither classical nor Bayesian nor permutation-based approaches show anything significant, there probably isn't anything.


The remaining problem with the bootstrap is that adjusting the means does not solve the problem: you need to generate data under the null hypothesis for both the mean and the variance.

Plus, in theory, mean-centering should not change the ResMS when keeping everything else constant. But remember the REML estimation and whitening: these will entail changes in your ResMS, and hence your idea will not work.
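For illustration, the OLS part of this claim can be checked numerically. The sketch below is my own toy example, not SPM code: with a condition-indicator design, subtracting each condition's mean removes something in the column space of the design matrix, so the OLS residuals, and hence the ResMS, are exactly unchanged (it is only the REML/whitening step that breaks this).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-way design: 3 conditions x 10 subjects, a single "voxel".
groups = np.repeat([0, 1, 2], 10)
X = np.eye(3)[groups]                      # condition-indicator design matrix
y = np.array([1.0, 2.0, 0.5])[groups] + rng.normal(0.0, 1.0, 30)

def resms(X, y):
    """Ordinary least-squares residual mean square."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r / (len(y) - np.linalg.matrix_rank(X))

# Mean-center each condition.
y0 = y - X @ np.array([y[groups == g].mean() for g in range(3)])

# The subtracted means lie in the column space of X, so the OLS
# residuals -- and hence ResMS -- are identical:
print(np.isclose(resms(X, y), resms(X, y0)))   # True
```

With REML-estimated non-sphericity and whitening, the effective error covariance is re-estimated from the (now effect-free) data, which is exactly where the equality above falls apart.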


Best
Simon


===================================

Univ.-Prof. Dr. med. Simon B. Eickhoff

Cognitive Neuroscience Group
Institute of Clinical Neuroscience and Medical Psychology
Heinrich-Heine University Düsseldorf
Phone:  +49 211 81 13018
Fax:        +49 211 81 13015
eMail:     [log in to unmask]

and

Brain Network Modelling Group
Institute for Neuroscience and Medicine (INM-1)
Research Center Jülich
Phone: +49 2461 61 8609
Fax:       +49 2461 61 2820
eMail:     [log in to unmask]

________________________________________
From: Bruno L. Giordano [[log in to unmask]]
Sent: Monday, 14 May 2012 18:41
To: Eickhoff, Simon
Cc: [log in to unmask]
Subject: Re: [SPM] AW: [SPM] AW: [SPM] Estimating a second-level model under the null hypothesis

Hi Simon,

I agree with you: lowering the threshold is not a good idea. I will
check SVC. However, my analysis is whole brain.

Yes, permutation (as opposed to bootstrap) is the obvious cure to these
problems. The two are not equivalent, however. My reading of Efron &
Tibshirani's 1993 book (p. 224) is that the bootstrap approach allows
better-targeted null hypotheses, hence my preference.

Insisting on the bootstrap: my thought is that in principle the mean
centering should not affect the ResMS image, hence the idea of plugging
in what was estimated with the non-mean-centered data into the analysis
of the mean-centered data.

Thank you for your suggestions,

       Bruno


On 14/05/2012 5:20 PM, Eickhoff, Simon wrote:
> Dear Bruno
>
>
> The problem is, when you use non-sphericity correction in your actual analysis, you will not be able to produce data under the null hypothesis given the F-threshold. You *could* lower that to, e.g., p<0.05 or even p<0.5 to force SPM to also consider those voxels in the REML estimation where your model doesn't fit, given that it actually fits nowhere. But it's hard to predict what will happen then.
>
> As an alternative, why don't you just use SVC as implemented in SPM? This would be the much more straightforward approach to inference in a particular ROI, in particular since you will be quite vulnerable with your null distribution, given that there is no principled way to specify the variance under the null distribution a priori. Alternatively, you could use permutation-based statistics (SnPM or the randomise tool in FSL).
>
>
> Best
> Simon
>
>
> ________________________________________
> From: SPM (Statistical Parametric Mapping) [[log in to unmask]] on behalf of "Bruno L. Giordano" [[log in to unmask]]
> Sent: Monday, 14 May 2012 18:13
> To: [log in to unmask]
> Subject: Re: [SPM] AW: [SPM] Estimating a second-level model under the null hypothesis
>
> Hello Donald and Simon,
>
> the reason for analyzing such mean-centered data is that I need to
> estimate the maximum value of my statistic within the analysis mask
> under the null hypothesis of no condition effect. This involves a
> resampling scheme (I am referring to the bootstrap-F principle): the
> FWE-corrected p-value of the statistic computed with the unaltered data
> can be computed from the observed distribution of the maximum statistic
> across analyses of resampled mean-centered data (percentile method).
>
> Donald: I tried adding a constant; it did not help. And disabling
> sphericity corrections and assuming independence is not a good idea.
>
> I am thinking that the correct approach here would be to take the
> ResMS and variance estimates from the unaltered data and plug those in
> when analyzing the mean-centered data. It is not obvious whether this
> would be enough (e.g., should I account for any whitening? I am not
> 100% familiar with the mechanics).
>
> Thank you for your thoughts,
>
>          Bruno
>
>
>
>
> On 14/05/2012 4:46 PM, MCLAREN, Donald wrote:
>> What you want to do is to disable the REML estimation