Dear all,
I am wondering whether the following statistical testing procedure has a
sensible rationale.
I would like to formulate and test different models (design matrices) for
different parts of the brain.
To simplify my reasoning, let's say that a set of regressors X_A is
defined for voxel A and a different set of regressors X_B is specified
for voxel B.
In addition to these regressors, nuisance regressors, e.g. the realignment
parameters, can be appended to form two different design matrices.
After estimating the coefficients for each voxel under its corresponding
model, one could compute an F-statistic for each voxel indicating the
statistical significance of the set of regressors X_A or X_B within the
complete design matrix. As each design matrix could have a different
number of regressors and a different degree of collinearity among them,
the degrees of freedom of each test would, in general, differ.
But eventually one would have two different uncorrected p-values for the
two voxels.
In the general case, the result would be a map of uncorrected p-values
or F-statistics but with different degrees of freedom for each voxel.
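To make the setup concrete, here is a minimal sketch (not SPM code; the function name, the simulated data, and the choice of tested columns are my own assumptions) of a partial F-test for a subset of regressors against the full design matrix, using matrix rank to determine the degrees of freedom so that collinearity among regressors is handled:

```python
import numpy as np
from scipy import stats

def partial_f_test(y, X_full, test_cols):
    """Partial F-test: do the columns `test_cols` of X_full explain
    significant variance over and above the remaining columns?
    Degrees of freedom come from matrix ranks, so collinear
    regressors do not inflate them. Returns (F, df1, df2, p)."""
    n = len(y)
    rank_full = np.linalg.matrix_rank(X_full)
    X_reduced = np.delete(X_full, test_cols, axis=1)
    rank_red = np.linalg.matrix_rank(X_reduced)

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    rss_full, rss_red = rss(X_full), rss(X_reduced)
    df1 = rank_full - rank_red          # effective number of regressors tested
    df2 = n - rank_full                 # residual degrees of freedom
    F = ((rss_red - rss_full) / df1) / (rss_full / df2)
    p = stats.f.sf(F, df1, df2)
    return F, df1, df2, p

# Hypothetical voxel A: 3 task regressors (X_A), 6 realignment
# parameters, and an intercept, with simulated data.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([rng.standard_normal((n, 9)), np.ones(n)])
y = X @ rng.standard_normal(X.shape[1]) + rng.standard_normal(n)
F_A, df1_A, df2_A, p_A = partial_f_test(y, X, test_cols=[0, 1, 2])
```

Running the same function on voxel B with its own design matrix would, in general, yield a different (df1, df2) pair, which is exactly the comparability problem raised below.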
In addition, we would like to correct the p-values with some multiple
hypothesis correction procedure, such as FDR.
So I have two questions:
- Would it be statistically correct to compare the p-values?
The F-statistics cannot be compared directly because they have different
degrees of freedom. But t-statistics could be converted to z-values and
then compared, is that right?
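The conversion you describe is the probability-integral transform: map each statistic to its p-value under its own null distribution, then to the z-value with the same tail probability. A minimal sketch (function names are mine, not from any toolbox; note that for very large statistics the p-value can underflow, which a production implementation would need to guard against):

```python
from scipy import stats

def t_to_z(t_val, df):
    # Upper-tail p under Student's t with df dof, mapped to the
    # standard-normal quantile with the same upper-tail probability.
    return stats.norm.isf(stats.t.sf(t_val, df))

def f_to_z(f_val, df1, df2):
    # Same transform for an F-statistic (one-sided by construction),
    # so F-maps with different per-voxel dof become comparable on
    # the z scale as well.
    return stats.norm.isf(stats.f.sf(f_val, df1, df2))

z1 = f_to_z(4.0, df1=3, df2=90)
z2 = f_to_z(4.0, df1=2, df2=95)  # same F, different dof -> different z
```

So not only t-statistics but also F-statistics can be placed on a common z scale this way, which addresses the comparability question directly.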
- Would it make sense to correct these p-values with the same procedure?
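On the second question: under each voxel's own null hypothesis its p-value is uniform on [0, 1] regardless of the degrees of freedom, so pooling the p-values into one FDR procedure is at least formally coherent. A minimal Benjamini-Hochberg sketch (my own implementation, not SPM's; statsmodels' `multipletests` with `method="fdr_bh"` does the same job):

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure at level q.
    Returns a boolean rejection mask aligned with `pvals`."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, n + 1) / n       # i/n * q for sorted p-values
    below = p[order] <= thresh
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest i with p_(i) <= i*q/n
        reject[order[:k + 1]] = True           # reject all smaller p-values too
    return reject

mask = bh_fdr([0.001, 0.01, 0.2, 0.8], q=0.05)
```

The caveat is independence/positive-dependence assumptions of BH across voxels, which hold (or are commonly assumed to hold approximately) in the standard single-design setting as well.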
Thanks very much in advance for your help.
Cesar