Hi SPMers,
I've been reading the SPM book (chapter 12) and I'm having a hard time
figuring out how the random effects analysis is performed:
http://www.fil.ion.ucl.ac.uk/spm/doc/books/hbf2/pdfs/Ch12.pdf
For every subject i=1..N and every time point t=1..T, we have
y_it = beta_i . x_it + e_it
where x is the "design vector" (for simplicity, it's not a matrix), beta_i
is a random variable (Gaussian, mean=mu, variance=sigma^2) and e is an iid
noise term (zero mean, sigma_e^2 variance). I also ignore the mean for
simplicity.
Each beta_i is estimated separately with:
\hat{beta_i} = (x_i' . x_i)^{-1}. (x_i' . y_i)
We can see that they are also Gaussian with:
mean = mu
variance = sigma ^2 + (x' . x)^{-1} sigma_e ^2
(I simplify here by assuming the same x' . x for every subject, which is
the case in my experiment.)
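To make sure I understand the first-level step, here is a minimal numpy
sketch of the per-subject OLS estimate above. The numbers (T=100 time
points, mu=2.0, sigma=0.5, sigma_e=1.0) are illustrative assumptions of
mine, not values from the chapter:

```python
import numpy as np

# Illustrative parameters (my own assumptions, not from the SPM book)
rng = np.random.default_rng(0)
T = 100                      # time points per subject
mu, sigma = 2.0, 0.5         # population mean and std of beta_i
sigma_e = 1.0                # noise std

x = rng.standard_normal(T)               # shared design vector x
beta_i = rng.normal(mu, sigma)           # this subject's true effect
y = beta_i * x + sigma_e * rng.standard_normal(T)

# \hat{beta_i} = (x' . x)^{-1} . (x' . y)
beta_hat = (x @ y) / (x @ x)
```

With T=100 the estimate should land close to the subject's true beta_i,
since the estimation variance (x' . x)^{-1} sigma_e^2 is roughly 1/T here.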
Now we have a collection of N Gaussian random variables, the
\hat{beta_i}'s, which we can test with a t-test. So we look at the
statistic:
t = sqrt(N) * sample_mean(beta) / sample_std(beta)
where
sample_mean(beta) = 1/N * sum_i \hat{beta_i}
sample_std(beta) = sqrt( 1/(N-1) * sum_i (\hat{beta_i} - sample_mean(beta))^2 )
and t follows a Student-t distribution with N-1 degrees of freedom.
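The second-level test above can be sketched the same way. The
\hat{beta_i}'s here are simulated stand-ins (assumed N=12, population
mean 2.0, std 0.6), not SPM output:

```python
import numpy as np

# Simulated stand-ins for the N per-subject estimates (my assumption)
rng = np.random.default_rng(1)
N = 12
beta_hats = rng.normal(2.0, 0.6, size=N)

sample_mean = beta_hats.mean()
sample_std = beta_hats.std(ddof=1)          # 1/(N-1) normalisation
t = np.sqrt(N) * sample_mean / sample_std   # Student-t, N-1 dof under H0
```

If I have the procedure right, this should agree with
scipy.stats.ttest_1samp(beta_hats, 0).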
QUESTION #1:
Am I reading Chapter 12 of the SPM book correctly? I.e., is the procedure
above the one used in SPM?
Provided my interpretation is correct, here's my follow-up question.
Something bothers me about the fact that the variance of \hat{beta_i}
still contains sigma_e^2. From reading Cheng Hsiao's book "Analysis of
Panel Data", it looks like there might be a way to better estimate the
beta's and obtain a more powerful test.
QUESTION #2:
Is the t-test used for random effects analysis in SPM the most powerful
for a given p-value? I.e., am I misreading Hsiao?
I hope I was clear with my two questions. Thank you in advance to anybody
who takes the time to reply.
Antoine
--
Contact info:
http://www.bruguier.com/contact.html