Dear Jeff,
Because, in my opinion, taking T- or Z-values directly to the 2nd level is just plain wrong (OK, overstated to make a point).
I am not a statistics 'guru', but here is my thought experiment:
Let's assume that in some voxel V each subject has a signal with amplitude A (or a relative A with respect to the global mean, such that after scaling all subjects have a signal of amplitude A in voxel V).
Now add a different amount of noise (i.e. an error signal) to each subject, with a different noise amplitude per subject, because we assume the noise differs between subjects (different amounts of movement in the scanner, different heart rates, scanner temperature, etc.), but the signal amplitude is similar.
Then the estimated regression coefficient B (i.e. the B-maps) for a regressor modelling our signal would reflect this signal amplitude A. If we now compute a T (or Z) value at the 1st level, we divide this B value (similar for all subjects) by the unexplained signal (the error), which is different for each subject. We would then end up with a set of points at the 2nd level with a high variance, not reflecting the variance in the measure we were interested in (signal amplitude A).
B would be much closer to our signal amplitude A, and hence a T-test on these Bs at the 2nd level would reflect a signal with the 'real' variance in A. We are simply not interested in each subject's noise at the 2nd level. As I have heard, a proper way to deal with the 'reliability' of estimating A at the first level would be to use Bayesian estimation, reflecting this 1st-level noise in the priors. I don't yet know that much about Bayesian statistics, though; someone else should jump in here....
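The thought experiment above can be sketched in a few lines of simulation; everything here (the on/off regressor, the number of subjects and scans, the noise range) is an illustrative assumption, not anything from SPM itself. Each subject gets the same true amplitude A but a different noise level; the estimated Bs stay close to A, while the per-subject T-values mostly track the noise level instead:

```python
import numpy as np

rng = np.random.default_rng(0)
A = 2.0                                  # true signal amplitude, identical for every subject
n_subj, n_scans = 12, 100
x = np.tile([0.0, 1.0], n_scans // 2)    # simple condition on/off regressor
sigmas = rng.uniform(0.5, 5.0, n_subj)   # subject-specific noise amplitudes

betas, tvals = [], []
for s in range(n_subj):
    y = A * x + rng.normal(0.0, sigmas[s], n_scans)   # signal + subject's own noise
    X = np.column_stack([np.ones(n_scans), x])        # intercept + regressor
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # 1st-level OLS fit
    resid = y - X @ b
    sigma2 = resid @ resid / (n_scans - 2)            # residual (unexplained) variance
    covb = sigma2 * np.linalg.inv(X.T @ X)
    betas.append(b[1])                                # B: estimate of A
    tvals.append(b[1] / np.sqrt(covb[1, 1]))          # T: B divided by its standard error

betas, tvals = np.array(betas), np.array(tvals)
# Bs scatter tightly around A; T-values vary widely with each subject's noise
print("mean B:", betas.mean(), " spread of Bs:", betas.std(), " spread of Ts:", tvals.std())
```

The spread of the T-values across subjects is driven by the per-subject noise amplitudes, which is exactly the variance one does not want to carry to the 2nd level.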
I think something similar happens in a regular repeated-measures ANOVA on behavioural data: for each subject you take the average of your dependent variable (say RT), which is basically a Beta value for a simple condition on/off regressor. Then one usually runs a T- or F-test on these mean RTs, thereby ignoring the error variance per subject.
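As a toy illustration of that analogy (the RT numbers are made up): fitting an intercept plus an on/off regressor by least squares gives a slope that equals the mean RT difference between conditions, i.e. the condition mean is indeed just a Beta:

```python
import numpy as np

rt = np.array([420.0, 450.0, 480.0, 510.0])   # hypothetical reaction times, ms
on = np.array([0.0, 1.0, 0.0, 1.0])           # condition on/off regressor

X = np.column_stack([np.ones_like(on), on])   # intercept + condition regressor
b0, b1 = np.linalg.lstsq(X, rt, rcond=None)[0]

# b0 is the mean RT in the 'off' condition; b1 is the mean RT difference
print(b0, b1)
print(rt[on == 1].mean() - rt[on == 0].mean())  # same as b1
```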
But perhaps my assumptions aren't realistic.
Looking forward to other comments,
Bas
--------------------------------------------
Dr. S.F.W. Neggers
dept. of Psychonomics,Helmholtz Institute
Utrecht University
Heidelberglaan 2
3584 CS, Utrecht, room 17.09
the Netherlands
Tel: (+31) 30 253 4582 Fax: (+31) 30 2534511
E-mail: [log in to unmask]
Web: http://www.fss.uu.nl/psn/pionier
--------------------------------------------
-----Original Message-----
From: SPM (Statistical Parametric Mapping)
[mailto:[log in to unmask]] On Behalf Of Jeffrey P Lorberbaum
Sent: Thursday, 11 November 2004 21:12
To:
Subject: Re: [SPM] Random effect and scaling
Hi Danny
I have also noticed the same. The global mean may be 100 (actually it is
not if you look at a within-brain mask; the global mean is generally
higher, as per my prior e-mails on the mailbase). In any case, the means
(betas) for a given region like the amygdala may fluctuate around 80 for
one person and around 100 for another, at least in my data, which puts
percent signal change ((b1 - b0)/b0 x 100) and the betas (say beta1) on
different scalings for each person. Why people do not use t- or z-maps for
each subject when grouping across subjects is unclear to me.
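To illustrate the scaling point with made-up numbers (the baselines of 80 and 100 echo the amygdala example above, but the activation betas are invented): the raw beta difference sits on each subject's own scale, while percent signal change normalises it away:

```python
# Two hypothetical subjects with the same relative activation but
# different baseline betas b0, as in the amygdala example above.
subjects = {"subj1": (80.0, 84.0), "subj2": (100.0, 105.0)}  # (b0, b1)

raw_diff = {}
psc = {}
for name, (b0, b1) in subjects.items():
    raw_diff[name] = b1 - b0             # on the subject's own scale
    psc[name] = (b1 - b0) / b0 * 100.0   # percent signal change

print(raw_diff)  # raw beta differences sit on each subject's own scale
print(psc)       # percent signal change is comparable across subjects
```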
Thanks,
Jeff
On Thu, 11 Nov 2004, Daniel H. Mathalon wrote:
> Christian,
>
> I have had similar questions about this issue. In typical multiple
> regression analysis implemented in standard statistics packages, and in
> most treatments of the subject in text books, a distinction is made between
> "partial regression coefficients" usually designated "b" versus "beta
> coefficients" which are the standardized b's. Despite the fact that SPM
> refers to the regression coefficients derived from fitting of the HRF to
> the data as Beta's, my understanding is that they are really unstandardized
> partial regression coefficients that are scaled to the units of the time
> series data. Although I believe that there are scaling transformations
> applied to the time series that are intended to produce a mean of 100,
> giving rise to the often stated rule of thumb that the Beta images have a
> rough correspondence to "percent signal change", there have been other
> postings on the SPM list that challenge this assumption. If the mean and
> variances of the time series are different for different subjects, for
> different test sessions, or for different runs, and if they are only
> imperfectly transformed to a common scale prior to model estimation,
> wouldn't it make more sense to pass the standardized "Betas" (which are
> scale-free) to the second level random effects analysis?
>
> I learned of an apparently related concern in connection with a recent
> description of a test-retest reliability analysis of fMRI data from a small
> sample of subjects. The results apparently showed that when the
> unstandardized beta images were the unit of analysis, test-retest
> reliability was poor. However, when percent signal change was calculated
> as the dependent measure, test-retest reliability was substantially
> improved. This could be explained by scaling variation in the Beta images
> across scan sessions.
>
> Any light that could be shed on this issue by the SPM gurus (including
> setting me straight on my perhaps erroneous assumptions) would be greatly
> appreciated.
>
> Dan
>
> >Dear SPM community,
> >
> >When using subject by subject first level analysis, and bringing the
> >con*.img to the second level, a colleague of mine asked me the seemingly
> >simple question of how scaling is handled. Not being scaled in a single
> >design matrix, are the beta values comparable enough? What if the
> >different subjects have dramatically different global's? Any reactions?
> >
> >Christian
> >
> >--
> >Christian Keysers, PhD
> >Assistant Professor
> >
> >BCN Neuro-Imaging Center
> >University of Groningen
> >Antonius Deusinglaan 2 (room 120)
> >9713 AW Groningen
> >
> >Phone: +31 50 3638794
> >Fax: +31 50 3638875
>
> Daniel H. Mathalon, Ph.D., M.D.
> Assistant Professor
> Department of Psychiatry
> Yale University School of Medicine
>
> Mail address: Psychiatry Service 116A
> VA Healthcare System
> 950 Campbell Avenue
> West Haven, CT 06516
>
> Phone (203) 932-5711, ext. 5539
> FAX : (203) 937-3886
> Pager 203-867-7756
> e-mail: [log in to unmask]
>