Stephen,
Thanks for your response.
I did a Google search on "standardized regression coefficient" and
pulled the attached pdf off a website from U. of Texas. It provides
a short description of the terminology I referred to in my e-mail.
In short, "regression coefficients" are those obtained when the
regressors are completely orthogonal, that is, when they do not depend
on the other regressors in the multiple regression model and would
have the same values if they were estimated in simple bivariate
regressions. "Partial" refers to the case where the regressors are
not orthogonal, in which case each regression coefficient reflects
adjustment for the correlation among the regressors. Finally, if the
data are standardized first (i.e., the dependent variable and
regressors are converted to z-scores), the coefficients are often
referred to as Betas, or standardized regression coefficients.
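To make that terminology concrete, the relationship between the
unstandardized b's and the standardized Betas can be checked
numerically. A minimal NumPy sketch (all variable names are
illustrative, not SPM code); the standardized coefficient is just
b_j * sd(x_j) / sd(y):

```python
import numpy as np

# Illustrative sketch: unstandardized (partial) b's vs. standardized Betas.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # correlated regressors
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

def fit(y, xs):
    """Ordinary least squares; returns slopes, dropping the intercept."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

b = fit(y, [x1, x2])                          # raw-units (partial) b's

def z(v):
    return (v - v.mean()) / v.std(ddof=1)     # convert to z-scores

beta = fit(z(y), [z(x1), z(x2)])              # standardized Betas

# The two are linked by beta_j = b_j * sd(x_j) / sd(y),
# exact up to floating-point error:
manual = b * np.array([x1.std(ddof=1), x2.std(ddof=1)]) / y.std(ddof=1)
print(np.allclose(beta, manual))
```

Because the regressors are correlated here, both b and beta differ
from what simple bivariate regressions would give, which is exactly
the "partial" adjustment described above.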
The test-retest reliability data I alluded to are not yet published.
I heard the data presented at a recent meeting of the fBIRN (a
multi-site fMRI consortium). More generally, I think a careful side
by side comparison of alternative units of analysis for second level
random effects analysis would be quite useful. Such a comparison
should be made both in terms of test-retest reliability and validity
(e.g., sensitivity to a known effect in a specific brain region).
Whether scaling each subject's data by its time series variance (as
would be done with standardized coefficients) would help or harm the
sensitivity of second-level random effects analysis remains unclear
to me. You are correct that the test-retest reliability
analysis I heard about did not specifically examine this.
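To make the candidate units of analysis concrete, here is a toy
one-voxel sketch (NumPy; all names illustrative) contrasting a beta
converted to percent-signal-change units via voxelwise mean scaling
with one scaled by the time series' own standard deviation. The two
are not equivalent: one divides by the mean, the other by variability.

```python
import numpy as np

# Toy one-voxel time series: baseline + boxcar "task" effect + noise.
rng = np.random.default_rng(1)
n = 120
regressor = np.tile([0.0] * 10 + [1.0] * 10, n // 20)   # on/off boxcar
baseline = 500.0
ts = baseline + 4.0 * regressor + rng.normal(scale=2.0, size=n)

X = np.column_stack([np.ones(n), regressor])
b = np.linalg.lstsq(X, ts, rcond=None)[0][1]            # raw-units beta

# Unit 1: voxelwise mean scaling -> beta in percent-signal-change units.
pct = 100.0 * ts / ts.mean()
b_pct = np.linalg.lstsq(X, pct, rcond=None)[0][1]

# Unit 2: scale by the time series' standard deviation instead.
b_std = b / ts.std(ddof=1)

print(round(b, 2), round(b_pct, 2), round(b_std, 2))
```

Since scaling the data scales the fitted coefficients by the same
factor, b_pct equals b * 100 / mean exactly, while b_std depends on
the noise level as well as the baseline, so the two units can rank
subjects differently at the second level.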
Best,
Dan
>On Thu, 11 Nov 2004 10:03:31 -0500, Daniel H. Mathalon
><[log in to unmask]> wrote:
>
>>Christian,
>>
>>I have had similar questions about this issue. In typical multiple
>>regression analysis implemented in standard statistics packages, and in
>>most treatments of the subject in text books, a distinction is made
>between
>>"partial regression coefficients" usually designated "b" versus "beta
>>coefficients" which are the standardized b's.
>
>I wasn't able to find any meaningful distinction between "partial
>regression coefficients" and "regression coefficients," either from a
>Google search or from my book on multiple regression (Neter et al.,
>_Applied Linear Statistical Models_). It appears "partial" is just a
>redundant term of emphasis.
>
>> Despite the fact that SPM
>>refers to the regression coefficients derived from fitting of the HRF to
>>the data as Beta's, my understanding is that they are really
>unstandardized
>>partial regression coefficients that are scaled to the units of the time
>>series data. Although I believe that there are scaling transformations
>>applied to the time series that are intended to produce a mean of 100,
>>giving rise to the often stated rule of thumb that the Beta images have a
>>rough correspondence to "percent signal change", there have been other
>>postings on the SPM list that challenge this assumption.
>
>There are many sorts of scaling one can use.
>
>First, there's global/proportional scaling, where each volume is scaled to
>its mean. (I'm ignoring the issue of whether the mean is computed over
>the entire volume or a set of intracerebral voxels.) There seems to be a
>consensus that global scaling isn't needed anymore in fMRI because low-
>frequency drifts are dealt with by high-pass filtering; and it might be
>harmful by introducing artifactual deactivations, etc.
>
>Second, there's grand mean scaling, which SPM does implicitly at the
>subject level, in which the mean used to scale is computed over all
>volumes in a given session (i.e., "run").
>
>Third, some people advocate using what I've termed "voxelwise" scaling,
>where each voxel is scaled separately (again, with its mean computed over
>the session/run).
>
>Grand mean scaling doesn't give true percent signal change, as the
>mean is over the entire volume. Voxelwise scaling notionally *does* give
>percent signal change. As far as I can tell, it's not agreed upon that
>voxelwise scaling is necessarily better; it might have some technical
>disadvantages. Furthermore, due to partial volume effects it's not clear
>that percent signal change in a voxel is that meaningful. For me, this is
>an empirical question that would have to be answered by looking at actual
>data (though you do make a claim in this regard below).
>
>>variances of the time series are different for different subjects, for
>>different test sessions, or for different runs, and if they are only
>>imperfectly transformed to a common scale prior to model estimation,
>>wouldn't it make more sense to pass the standardized "Betas" (which are
>>scale-free) to the second level random effects analysis?
>
>That idea (scaling by variance) is an interesting one; I haven't heard it
>discussed much at all, though I did recently come across a message that
>appears to make the same suggestion, at
>http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0011&L=spm&P=R6768&I=-1
>
>>I learned of an apparently related concern in connection with a recent
>>description of a test-retest reliability analysis of fMRI data from a
>small
>>sample of subjects. The results apparently showed that when the
>>unstandardized beta images were the unit of analysis, test-retest
>>reliability was poor. However, when percent signal change was calculated
>>as the dependent measure, test-retest reliability was substantially
>>improved. This could be explained by scaling variation in the Beta images
>>across scan sessions.
>
>That's extremely interesting. Is that a published result?
>
>Note, however, that using percent signal change is not equivalent to using
>standardized coefficients. The former involves scaling the raw data; the
>latter involves scaling subject-level coefficients by their standard
>deviation.
>
>>Any light that could be shed on this issue by the SPM gurus (including
>>setting me straight on my perhaps erroneous assumptions) would be greatly
>>appreciated.
>>
>>Dan
>>
>>>Dear SPM community,
>>>
>>>When using subject by subject first level analysis, and bringing the
>>>con*.img to the second level, a colleague of mine asked me the seemingly
>>>simple question of how scaling is handled. Not being scaled in a single
>>>design matrix, are the beta values comparable enough? What if the
>>>different subjects have dramatically different global's? Any reactions?
>>>
>>>Christian
>>>
>>>--
>>>Christian Keysers, PhD
>>>Assistant Professor
>>>
>>>BCN Neuro-Imaging Center
>>>University of Groningen
>>>Antonius Deusinglaan 2 (room 120)
>>>9713 AW Groningen
>>>
>>>Phone: +31 50 3638794
>>>Fax: +31 50 3638875
>>
>>Daniel H. Mathalon, Ph.D., M.D.
>>Assistant Professor
>>Department of Psychiatry
>>Yale University School of Medicine
>>
>>Mail address: Psychiatry Service 116A
>> VA Healthcare System
>> 950 Campbell Avenue
>> West Haven, CT 06516
>>
>>Phone (203) 932-5711, ext. 5539
>>FAX : (203) 937-3886
>>Pager 203-867-7756
>>e-mail: [log in to unmask]