> the jackknife which Marko proposed [...] as it provided a post-hoc power analysis 

This will not necessarily solve the issue, though. If there are no clusters to begin with, you still have to go with a more liberal threshold. And if you are bothered by a threshold being "too liberal" in the first place, it won't really help to find out that n-x subjects are sufficient to detect the effect at that chosen, liberal threshold, or that it is a consistent finding present in x % of the n-1 models.
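For what it's worth, the leave-one-out logic behind the jackknife is simple enough to script yourself. Below is a minimal Python/numpy sketch, under the assumption that you have already extracted one contrast estimate per subject from an a-priori ROI; the variable names and the simulated values are placeholders, not anything from your data.

import numpy as np
from scipy import stats

# Placeholder: one contrast estimate per subject extracted from an a-priori ROI.
# Replace the simulated values with your own data.
rng = np.random.default_rng(0)
con_values = rng.normal(loc=0.3, scale=1.0, size=20)

alpha = 0.001   # whatever threshold you committed to in advance
n = len(con_values)

significant = []
for i in range(n):
    loo_sample = np.delete(con_values, i)        # leave subject i out
    t, p = stats.ttest_1samp(loo_sample, 0.0)    # one-sample t-test against zero
    significant.append(p < alpha and t > 0)

print(f"Effect present in {100 * np.mean(significant):.0f}% of the {n} leave-one-out models")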

> Another option is to use non-parametric (SnPM)

For VBM this might actually be a good idea, given the non-stationarity issue and the fact that the corresponding non-stationary RFT corrections do not necessarily perform well (as mentioned in Tibor's linked draft).
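To make explicit what the non-parametric route buys you: the FWE correction comes from the permutation distribution of the maximum statistic, so no smoothness or stationarity assumptions enter at all. Here is a rough sign-flipping sketch for a one-sample design in plain numpy; it only illustrates the idea behind SnPM-style maximum-statistic correction, it is not SnPM itself, and the data are simulated placeholders.

import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 20, 5000
data = rng.normal(size=(n_subj, n_vox))   # placeholder subject-by-voxel contrast images

def tmap(x):
    # one-sample t statistic per voxel
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

t_obs = tmap(data)

# Null distribution of the maximum t value via random sign flipping of whole subjects
n_perm = 1000
max_t = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    max_t[i] = tmap(data * signs).max()

# Voxel-wise FWE-corrected p-values: how often the permutation maximum reaches the observed t
p_fwe = (1 + (max_t[None, :] >= t_obs[:, None]).sum(axis=1)) / (1 + n_perm)
print("Voxels surviving p_FWE < 0.05:", int((p_fwe < 0.05).sum()))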

> A reviewer is suggesting we use an even more liberal threshold

Actually, this is bad science. If you had already detected and reported effects, everyone would be happy; now that you fail to find effects, you are advised to lower the threshold. Yes, it is certainly problematic that we still go with thresholded T maps most of the time and do not, e.g., also report the beta/con estimates, thus leaving open whether a significant effect is of any relevance and also "hiding" major parts of the analysis (Allen et al., 2012, http://dx.doi.org/10.1016/j.neuron.2012.05.001, provide an example of integrating beta weights and statistics into a single figure). But thresholds should of course be chosen in advance. The same holds when turning from, e.g., RFT-based statistics to other testing methods: you should decide in advance.

This is frequently seen with small volume corrections. The SVC is absolutely legitimate, but a proper SVC should take into account all of the a-priori regions, which may well be a large proportion of the brain. However, people often look at the whole-brain statistics at a certain threshold, detect effects for some of their a-priori regions, fail to find them for others, and only then come up with an SVC restricted to the "remaining" a-priori regions. One can expect a similar bias when people switch from a default threshold like .001 uncorrected to another one only after they have failed to detect effects (and not because they consider a certain approach more appropriate).
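To make the SVC point concrete: done properly, the correction has to span the full set of a-priori regions that were fixed in advance, including the ones that already showed an effect at the whole-brain threshold. In the simplest case that is just a correction across all pre-specified ROIs, e.g. Bonferroni over the within-region SVC p-values. The region names and numbers below are made up purely for illustration.

import numpy as np

# Placeholder: peak-level p-values, FWE-corrected within each a-priori region (SVC),
# for the complete list of regions specified in advance.
apriori_rois = ["ROI_A", "ROI_B", "ROI_C", "ROI_D", "ROI_E", "ROI_F"]
svc_p = np.array([0.004, 0.012, 0.030, 0.21, 0.47, 0.65])

# Bonferroni across the *full* a-priori set, not just the regions left over after
# inspecting the whole-brain results. Bonferroni is only one option, of course.
p_corrected = np.minimum(svc_p * len(apriori_rois), 1.0)

for roi, p in zip(apriori_rois, p_corrected):
    print(f"{roi}: corrected p = {p:.3f}")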

More generally, the role of the reviewers would be worth a note of its own. Reviewers often ask for additional or different analyses, but the resulting findings are then biased, because the suggestions are made after peeking at the results. So they should only be allowed to criticise incorrectly performed analyses ;-)

Turning back to your data: as stated, I'd go with an ROI approach, as it's the most obvious one to me. You have some literature, you have some hypotheses, you decided to go with an established threshold, you failed to find effects, so you provide additional information for the a-priori regions.
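Practically, "providing additional information for the a-priori regions" can be as simple as reporting the mean contrast estimate per ROI alongside the statistic, so readers see the effect size and not only a thresholded map. A small nibabel/numpy sketch; the file names are placeholders for your own con images and binary ROI masks (assumed to be in the same space).

import glob
import numpy as np
import nibabel as nib
from scipy import stats

# Placeholder paths -- point these at your own first-level con images and ROI masks
con_files = sorted(glob.glob("con_0001_sub-*.nii"))
roi_files = {"ROI_A": "roi_A_mask.nii", "ROI_B": "roi_B_mask.nii"}

con_data = np.stack([nib.load(f).get_fdata() for f in con_files])   # subjects x X x Y x Z

for name, mask_file in roi_files.items():
    mask = nib.load(mask_file).get_fdata() > 0
    roi_means = con_data[:, mask].mean(axis=1)   # one mean contrast estimate per subject
    t, p = stats.ttest_1samp(roi_means, 0.0)
    print(f"{name}: mean = {roi_means.mean():.3f} (SD {roi_means.std(ddof=1):.3f}), "
          f"t({len(roi_means) - 1}) = {t:.2f}, p = {p:.3f}")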

Best

Helmut