Hi everyone,
I have to add my 2 cents to the discussion on percent signal change...I
think that beta values are SOMETIMES equal to percent signal change, but
not always. Betas can have arbitrary scaling, so they don't always give
you % change, but they should give you something proportional to it.
Even if your model has high error variance, the beta is still your best
(least-squares) estimate of the change in signal. If the error variance is
high, the beta might not be significantly different from zero - but that
is a different question from "what is the % signal change?" So you might
have a timeseries where you estimate the % change, and it's, say, 0.1%.
If properly scaled, a beta of 0.1 = a % change of 0.1% - but whether one
should infer that it's a "real" change, rather than random noise, depends
on the significance of the beta.
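To illustrate the point (a toy simulation of my own, not from any real data - the block design, noise levels, and true effect size are all made up): fitting the same true effect under low and high noise gives roughly the same beta either way, but very different t values.

```python
import numpy as np

def fit_beta_t(noise_sd, seed=0, n=200, true_beta=0.1):
    """OLS fit of a block-design regressor; returns (beta, t)."""
    rng = np.random.default_rng(seed)
    # Hypothetical block design: alternating 10 TRs off, 10 TRs on.
    x = ((np.arange(n) // 10) % 2).astype(float)
    y = true_beta * x + rng.normal(0.0, noise_sd, n)
    X = np.column_stack([x, np.ones(n)])          # regressor + intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])     # residual error variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0], beta[0] / se

for sd in (0.05, 1.0):                            # low vs. high error variance
    b, t = fit_beta_t(sd)
    print(f"noise sd {sd}: beta = {b:.3f}, t = {t:.1f}")
```

In both runs the beta lands near the true 0.1 - it's the best guess regardless - but only in the low-noise case is the t value large enough to call the change "real."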
About scaling: if you mean-center your predictors and then measure betas,
you lose information about the baseline level of signal, so you lose info
about % change from that baseline. I think - correct me if I'm wrong -
that if your HRF is normalized so that the height of the impulse response
function is 1% of the baseline signal, then a beta of 1 = 1% signal
change. But this isn't guaranteed to be the case, so I'm not sure right
now how to normalize betas to get % change - maybe someone can say...
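For concreteness, here's one convention along those lines (this is my own sketch, and the specific scaling - unit-peak regressor, divide by the baseline mean - is an assumption, not something established in this thread): if the regressor's peak height is 1, a beta of b in raw signal units corresponds to 100 * b / baseline percent change.

```python
import numpy as np

def beta_to_percent(beta, regressor, baseline_mean):
    """Convert a raw-units beta to % signal change.

    Assumes the fitted regressor's peak maps one unit of beta to the
    full response height; dividing by the regressor's max handles
    regressors not already scaled to peak = 1.
    """
    return 100.0 * beta * np.max(regressor) / baseline_mean

# Hypothetical numbers: baseline of 1000 raw units, unit-peak regressor,
# beta of 1 raw unit -> 0.1% signal change.
hrf_regressor = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
print(beta_to_percent(1.0, hrf_regressor, 1000.0))
```

The point is just that the conversion needs two pieces of information the beta alone doesn't carry: the regressor's scaling and the baseline level - which is exactly what mean-centering throws away.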
Anyway, about the issue of parametric maps: I think you have to have a
statistical map so that you know what's significant, and that's also what
you want if you're trying to find reliable changes. If, however, you're
more concerned with which changes are the LARGEST rather than which are
the most reliable, you might want to make a % signal change map for
significant voxels only. I think that would be a very useful way to
summarize results.
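Something like this, in other words (a sketch only - the voxel values, baseline, and threshold are invented for illustration): show % change where the statistic passes threshold, and blank out the rest.

```python
import numpy as np

def percent_change_map(betas, tvals, baseline, t_thresh=3.0):
    """% signal change map, shown only at voxels where |t| >= t_thresh.

    Non-significant voxels are set to NaN so they drop out of the display.
    Assumes betas are in raw units and baseline is the mean signal level.
    """
    pct = 100.0 * betas / baseline
    return np.where(np.abs(tvals) >= t_thresh, pct, np.nan)

# Two hypothetical voxels: one significant, one not.
betas = np.array([1.0, 2.0])
tvals = np.array([5.0, 1.0])
print(percent_change_map(betas, tvals, 1000.0))
```

So the statistical map does the gatekeeping, and the magnitudes you report are in interpretable % units rather than t units.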
So please correct my faulty thinking on any of these points...
Thanks,
Tor
_____________________________
Tor Wager
Department of Psychology
University of Michigan
Cognition and Perception Area
525 East University
Ann Arbor, MI 48109-1109
Office: 734-936-1295
Home: 734-995-8975
Email: [log in to unmask]
_____________________________
On Mon, 16 Apr 2001, Stephen Fromm wrote:
> Regarding the discussion on betas and % signal change, I'd be interested if
> anyone had comments on the validity of looking at % signal change. I guess
> I'm asking for comments as to why we (the community) use *statistical*
> parametric maps, as opposed to *change* maps (like % signal change).
>
> My vague impression (I'm confining my remarks to fMRI):
>
> Pros: in the best possible world, there would be no noise. We could make
> statements like "this task had a large effect on signal; this stimulus had a
> small effect on signal". (Recall the point made in statistics texts that
> you can have a statistically significant effect that is not important, in
> that the amount of change induced is small---especially when the available
> degrees of freedom is high.)
>
> Cons: we live in a world where there is lots of noise. Hence, statistical
> images are a necessity. Furthermore (at least for fMRI), drawing
> conclusions about % signal change implies that there is some kind of zero
> baseline (in statistical language, that signal is a ratio measure, not just
> an interval measure); and this isn't so clear. (I'd especially appreciate
> comments on this last point. One might make some argument that, by
> linearizing each step in the path from neuronal activity to raw fMRI data,
> there *is* a ratio scale here, but I'm not so convinced.)
>
> Best wishes,
>
> Stephen Fromm, PhD
> NIDCD/NIH
>