On Wed, Nov 14, 2012 at 8:02 AM, Gabor Oederland <[log in to unmask]> wrote:
> Dear Donald and Roberto,
>
>
> thank you very much for your clarifications! Still, there are some open questions.
>
>
> What I want: run F-tests for the main effects and the interaction, using >> an uncorrected voxel-level threshold plus some cluster-size-defining threshold <<. This yields a comfortable number of contrasts with potentially significant clusters (e.g. 3 for an n x m ANOVA), which is another reason why I would prefer the F-tests over a series of t-tests. I would then further explore any significant (or reasonably large) cluster.
>
> For illustrative purposes I would like to show percent signal change values. So I thought I would simply extract PSC values for the clusters and compare them to find out the reasons (well...) for the significant effects in the F-tests (in this case, would it be appropriate to report p-values for such an exploratory analysis at all?).
>>>> The issue with reporting the p-values from these clusters is that they are uncorrected and could lead to false positives. Take, for example, a cluster where at the whole-brain level A>B is significant but A>C is not, while at the ROI level the p-value for A>C falls below 0.05. This would lead you to conclude that A differs from C, but only because you restricted the test to a single region. Thus, you need some form of correction.
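The correction the reply asks for can be as simple as a Bonferroni adjustment across the pairwise post-hoc tests. A minimal sketch in Python, assuming paired-sample comparisons; the PSC arrays here are hypothetical random data, not values from any real study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-subject percent-signal-change values for three conditions
psc_a = rng.normal(0.5, 0.2, 20)
psc_b = rng.normal(0.3, 0.2, 20)
psc_c = rng.normal(0.3, 0.2, 20)

pairs = {"A>B": (psc_a, psc_b), "A>C": (psc_a, psc_c), "B>C": (psc_b, psc_c)}
n_tests = len(pairs)

for name, (x, y) in pairs.items():
    t, p = stats.ttest_rel(x, y)        # paired post-hoc t-test
    p_corr = min(p * n_tests, 1.0)      # Bonferroni: multiply by number of tests
    print(f"{name}: t = {t:.2f}, uncorrected p = {p:.4f}, corrected p = {p_corr:.4f}")
```

Reporting the corrected values keeps the family-wise error rate of the post-hoc family at the nominal level; Holm or FDR procedures would be less conservative alternatives.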
>
> But depending on the data/threshold, the clusters might be very large in some regions, so there could be quite a loss of information when lumping all the voxels together (imagine one part of a large cluster showing A > B = C and another part A = B > C). Thus I thought of post-hoc t-tests limited to the voxels of the significant clusters. Since I don't want to report that 10 voxels inside one cluster showed A > B and 10 others B > C, there is clearly a need for a cluster-size-defining threshold. But, as I said, if I do not (want to) correct for the number of voxels in the whole brain in the first F-tests, this approach seems to be of no practical use for the post-hoc tests.
>> You could draw spheres around the peaks. If there are subclusters as you describe, each subcluster would have its own peak(s). By using spheres around each peak, you would be able to see the differences within the larger cluster. You could also use a higher threshold to separate the sub-parts of the larger cluster. The same issue as above still holds, though: you need to control the Type I error rate of the post-hoc tests.
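Building a spherical ROI around a peak is straightforward in voxel space. A small sketch, assuming an isotropic grid; the volume shape, peak coordinate, and radius are made up for illustration:

```python
import numpy as np

def sphere_mask(shape, center, radius_vox):
    """Boolean mask of all voxels within radius_vox of center (voxel units)."""
    grids = np.ogrid[tuple(slice(0, s) for s in shape)]
    dist2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return dist2 <= radius_vox ** 2

# Hypothetical 6 mm-radius sphere (2 voxels at 3 mm isotropic) around a peak
mask = sphere_mask((64, 64, 40), center=(30, 22, 18), radius_vox=2)
print(mask.sum(), "voxels in sphere")
```

PSC values can then be averaged within `mask` for each subject and condition before running the (corrected) post-hoc comparisons.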
>
>
> Any ideas?
>
>
> Best,
>
> Gabor