Reply-To: [log in to unmask]
Sent: Thursday, October 02, 2008 12:01 PM
To: Fromm, Stephen (NIH/NIMH) [C]
Cc: [log in to unmask]
Subject: Re: Multi-masking for Multiple Comparison Correction
I admit that the case where one defines the region of interest using the same contrast one is using for the inference is so perverse I did not think of it. Maybe one can find a way in which this test makes sense, as Tom does, but it is definitely of no practical value.
I would argue that conditionality is the appropriate concept here, since it requires one to make assumptions explicit. It makes no sense to quarrel about the usage of the word 'bias', but I urge Stephen to consider the idea that specifying the assumptions and the condition under which an inference holds is considerably more precise and may well include what he means by this word.
I believe that Doug Burman raises an important point regarding orthogonality of contrasts. Assuming a normal distribution of the errors, it is indeed the case that the two inferences are independent, since the underlying test statistics are independent. This is not an unusual case: the individual differences example I gave in my original post is another example of such a pair of orthogonal contrasts.
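To make the independence concrete, here is a small simulation (illustrative only; the design and variable names are my own, not SPM code). With normal errors and two conditions A and B, the contrasts A+B and A-B are orthogonal, and their one-sample t statistics are independent, so their empirical correlation across repeated experiments should be near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_experiments = 20, 20000

t_sum, t_diff = [], []
for _ in range(n_experiments):
    a = rng.normal(size=n_subjects)  # responses to condition A (null)
    b = rng.normal(size=n_subjects)  # responses to condition B (null)
    s = a + b                        # contrast 1: A + B
    d = a - b                        # contrast 2: A - B, orthogonal to contrast 1
    # one-sample t statistic for each contrast
    t_sum.append(s.mean() / (s.std(ddof=1) / np.sqrt(n_subjects)))
    t_diff.append(d.mean() / (d.std(ddof=1) / np.sqrt(n_subjects)))

r = np.corrcoef(t_sum, t_diff)[0, 1]
print(f"correlation between the two t statistics: {r:.4f}")
```

The key fact is that cov(a+b, a-b) = var(a) - var(b) = 0 when the two conditions have equal error variance, which under normality implies full independence of the two statistics.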
The distinction between dependent and independent variables (the point raised by Stephen in the latest post) is not so clear-cut when we are dealing with testing families, but this is a rather involved issue. Voxel-level corrected significance values, the gold standard of inference in neuroimaging, are defined conditionally on the subset of voxels where the null hypothesis holds. Thus, they are conditional on a quality of the dependent variable (what Stephen finds objectionable). Having said this, the SPM strategy of using random fields to derive the rejection region always assumes the worst case that the null holds over the whole volume, so that the conditionality is no longer present. However, the whole literature on repeated testing and strong control (such as the relevant parts of Hochberg and Tamhane's book) thrives on the opportunity offered by this type of conditionality on the dependent variable (through multi-step testing), which shows that many statisticians find it acceptable. In the functional ROI case, we do the same thing: prune the testing family on the basis of properties of the dependent variable.
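As an illustration of the multi-step testing discussed by Hochberg and Tamhane, here is a sketch of Holm's step-down procedure (a textbook example, not taken from this thread): the family is pruned in a data-dependent way, with the rejection threshold relaxing after each rejection, while strong familywise error control is retained:

```python
import numpy as np

def holm_rejections(p_values, alpha=0.05):
    """Holm's step-down procedure: test p-values from smallest to largest,
    comparing the k-th smallest against alpha / (m - k). The threshold
    relaxes at each step -- data-dependent pruning of the testing family --
    yet familywise error is strongly controlled at alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for k, idx in enumerate(order):
        if p[idx] <= alpha / (m - k):
            reject[idx] = True
        else:
            break  # stop at the first non-rejection; all larger p-values fail
    return reject

print(holm_rejections([0.001, 0.010, 0.030, 0.200]))
```

Here the third test is compared against 0.05/2 = 0.025 only because the first two were already rejected; a single-step Bonferroni test would have held every p-value to 0.05/4.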
It seems that Stephen needs a test in which the conditionality does not include the selection of the subjects on which the second test is carried out. That's fine; it's quite possible that this may be needed for some specific problem. I suspect that formally defining the inferential differences he lists in his latest post would be a challenging task. Be that as it may, I find the functional ROI definition for pairs of orthogonal contrasts logically clear and unobjectionable in the appropriate context.
Stepwise regression (which isn't the same as multi-step testing) has the purpose of modelling the data, not of generating valid p values, and so it is misused in the example in Stephen's post. (It makes a difference, though, if you are selecting on the nuisance covariates only.) To obtain valid p values here, you need to include all considered models in a testing family and derive the appropriate correction.
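A minimal sketch of this point (all names, sizes, and thresholds here are my own, chosen for illustration): selecting the best-fitting of several pure-noise covariates and then testing the winner as if it had been prespecified inflates the false positive rate far above the nominal 5%, whereas a correction over the whole testing family, implemented here as a permutation null for the maximum |t|, restores validity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 10            # observations; candidate covariates (all pure noise)
reps, n_perm = 500, 99
t_crit = 2.048           # two-sided 5% critical value of the t distribution, df = n - 2 = 28

def abs_t(y, X):
    """|t| statistic of the simple regression slope for each column of X."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.abs(r) * np.sqrt((n - 2) / (1 - r**2))

naive_fp = corrected_fp = 0
for _ in range(reps):
    y = rng.normal(size=n)
    X = rng.normal(size=(n, k))
    best = abs_t(y, X).max()           # "stepwise" pick: best-fitting covariate
    naive_fp += best > t_crit          # naive: test the winner as if prespecified
    # correction over the whole testing family: permutation null of max |t|
    perm_max = np.array([abs_t(rng.permutation(y), X).max() for _ in range(n_perm)])
    p_corr = (1 + np.sum(perm_max >= best)) / (n_perm + 1)
    corrected_fp += p_corr <= 0.05

print(f"false positive rate, naive:     {naive_fp / reps:.3f}")
print(f"false positive rate, corrected: {corrected_fp / reps:.3f}")
```

With ten independent null covariates the naive rate is roughly 1 - 0.95^10, about 40%, while the family-corrected rate stays at the nominal level.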
Cheers, Roberto
Quoting "Fromm, Stephen (NIH/NIMH) [C]" <[log in to unmask]>:
> " 'In regions showing greater activation for angry and happy faces than
> for dots, we found angry faces to produce greater activation than happy
> faces'. This would be fine I think."
>
> I don't think this example gets at the full story, because the statement