On Thu, 10 Mar 2005 05:28:12 +0000, Daniel Weissman <[log in to unmask]>
wrote:
[snip]
>****The use of a Euclidean normalization raises some important issues
>that I wouldn't mind getting some feedback about.****
>
>First, a Euclidean normalization will transform the values 10, 20, 30, 40,
>50, 60, 70, 80 in exactly the same way as it transforms the values 1, 2, 3,
>4, 5, 6, 7, 8. In both cases, you end up with new values that are 0.0700,
>0.1400, 0.2100, 0.2801, 0.3501, 0.4201, 0.4901, and 0.5601. Thus, using
>such a transformation "compresses" the variability present in one set of
>parametric values more than it "compresses" the variability present in the
>other set of parametric values, such that after the transformation both
>sets of parametric values have exactly the same variance.
The scaling of the columns in the design matrix X doesn't matter, because
there's a duality between X and the betas: if you scale column j in X up
by c, you just scale the corresponding beta down by c. This is clear from
the form of the linear model (summing over j),

    y_i = sum_j X_ij * beta_j + epsilon_i
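A quick numerical sanity check of this duality (a hypothetical sketch, not SPM code -- the values and variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([10., 20., 30., 40., 50., 60., 70., 80.])
X = np.column_stack([np.ones_like(x), x])   # intercept + parametric regressor
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, size=x.shape)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)

c = 10.0
X_scaled = X.copy()
X_scaled[:, 1] /= c                         # scale the column DOWN by c ...
beta_scaled, *_ = np.linalg.lstsq(X_scaled, y, rcond=None)

print(np.allclose(beta_scaled[1], c * beta[1]))       # ... beta goes UP by c
print(np.allclose(X @ beta, X_scaled @ beta_scaled))  # fitted signal unchanged
```

The fitted signal X*beta is identical either way; only the bookkeeping between the column and its beta changes.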
>Now, what if the two sets of parametric regressors are RTs for two
>different trial types? And, let's say that one wants to determine whether
>the relationship between RT and MR signal is stronger for one trial type
>than for another? If MR signal scales linearly with some absolute measure
>of RT, then we'll get a higher beta value for the parametric regressor
>whose original, non-transformed values are more variable (e.g., 10, 20, 30,
>etc.) than for the parametric regressor whose original values were less
>variable (e.g., 1, 2, 3, etc.), even though MR signal scales with RT in
>exactly the same way for both of these regressors! Thus, there appears to
>be an assumption that MR signal scales with some kind of "normalized"
>measure of RT rather than an absolute measure (e.g., milliseconds). Could
>someone please comment on this?
What's really going on is the issue of the appropriate interpretation of
the betas. A beta for a normalized regressor is in units of signal per
unit of the *normalized* covariate, not per millisecond. Because the two
regressors weren't scaled by the same factor, you can't compare the two
betas directly; you first have to undo the scaling (multiply each beta by
the same factor its regressor was scaled by).
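To make that concrete, here is a hypothetical sketch (example values are mine, not from the thread): two RT regressors with the same underlying ms-to-signal slope give different betas after Euclidean normalization, but agree again once the scaling is undone:

```python
import numpy as np

def euclid_norm(v):
    """Euclidean normalization: divide by the vector's L2 norm."""
    return v / np.linalg.norm(v)

rt_a = np.array([10., 20., 30., 40., 50., 60., 70., 80.])
rt_b = np.array([1., 2., 3., 4., 5., 6., 7., 8.])

slope = 0.5                 # identical ms-to-signal slope for both trial types
y_a = slope * rt_a
y_b = slope * rt_b

# Fit each normalized regressor alone (no intercept, for simplicity).
# Since the regressor has unit norm, the least-squares beta is just x_hat . y.
beta_a = euclid_norm(rt_a) @ y_a
beta_b = euclid_norm(rt_b) @ y_b

print(beta_a, beta_b)       # unequal: each beta = slope * ||rt||

# Undo the scaling before comparing: both recover the 0.5 per-ms slope.
print(beta_a / np.linalg.norm(rt_a), beta_b / np.linalg.norm(rt_b))
```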
>Second, regarding normalized measures, would there be a big difference
>between using a Euclidean normalization (followed by mean-centering) to
>transform the original parametric values and using a z-transformation?
>Both methods will accomplish mean-centering and both will involve unequal
>"compression" of the variability present in each of the original sets of
>parametric values (i.e., there will be more compression for the sets of
>parametric values that have greater variance). So, is there any reason for
>preferring a Euclidean normalization (followed by mean-centering) to a
>z-transformation of the original parametric values?
The Z-transformation isn't linear and hence shouldn't be used.
If the parametric modulation model you're using is a linear one, then
scaling the values won't change anything; you still have a linear
parametric model. But if you do something like square the values of the
parameter (or do something else nonlinear), rather than just scaling,
you're positing a different model.
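The distinction can be checked numerically (a hypothetical sketch with invented values): any linear rescaling of the parameter leaves the model's fit -- and hence the residuals -- untouched, while a nonlinear transform such as squaring changes what the model can fit:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 7.8])  # roughly linear in x

def residuals(X, y):
    """Least-squares residuals for design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

X_raw    = np.column_stack([np.ones_like(x), x])
X_scaled = np.column_stack([np.ones_like(x), x / np.linalg.norm(x)])
X_sq     = np.column_stack([np.ones_like(x), x**2])

r_raw    = residuals(X_raw, y)
r_scaled = residuals(X_scaled, y)
r_sq     = residuals(X_sq, y)

print(np.allclose(r_raw, r_scaled))  # True: same column space, same model
print(np.allclose(r_raw, r_sq))      # False: squaring posits a different model
```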
>Third, if one assumes that MR signal varies linearly with some absolute
>measure of a parametric value (e.g., milliseconds for RT), then would one
>want to use a mean-centered version of the original parametric values
>without performing a Euclidean transformation (e.g., for the values 1, 2, 3
>enter -1, 0, and 1)? In this case, one would get the same betas for two
>parametric regressors that differed in the variability of their original
>values (e.g., 1, 2, 3 versus 10, 20, 30) because the variability in MR
>signal would be proportionally greater for the more variable regressor
>than for the less variable regressor. However, there might be a magnitude
>difference between the columns that could be problematic. Any thoughts?
As I noted above, the scaling doesn't matter, as long as you interpret the
results correctly.
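For the third question specifically, a small hypothetical sketch (values invented): mean-centering without any normalization keeps the regressors in millisecond units, so two trial types with the same ms-to-signal slope yield identical betas, as the questioner suggests:

```python
import numpy as np

rt_a = np.array([10., 20., 30.])
rt_b = np.array([1., 2., 3.])
slope = 0.5                  # same ms-to-signal slope for both trial types
y_a = slope * rt_a
y_b = slope * rt_b

# Mean-center only; keep millisecond units.
xa, ya = rt_a - rt_a.mean(), y_a - y_a.mean()
xb, yb = rt_b - rt_b.mean(), y_b - y_b.mean()

# Simple-regression slope: (x . y) / (x . x)
beta_a = (xa @ ya) / (xa @ xa)
beta_b = (xb @ yb) / (xb @ xb)
print(beta_a, beta_b)        # both 0.5: the per-millisecond slope
```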
Best,
S
>
>
>Yours,
>
>Daniel Weissman
>Center for Cognitive Neuroscience
>Duke University
>Durham, NC 27705