Hi again Christopher,
I jumped the gun a bit in my previous reply, so please read this one in full.
> > But, as I said, for PET it is much easier and you DO assume
> > heteroscedasticity (at least across voxels) when you use SPM.
>
> Many thanks for clearing that up for me! I'm pleased to know, and yet I
> wonder what this all means wrt sensitivity. Since variance is estimated
> voxelwise, there are precious few df... Is there an assumption of
> conditions having equal variances -> pooling across conditions?
Yes, there is a homoscedasticity assumption across conditions.
As for df, if you look at sensitivity vs df you will see a steep increase up to roughly 15-20 df, after which it starts trailing off. From this it follows (I think) that group sizes of about 16-20 subjects make a lot of sense, and the all too common 8-10 really means LOTS of type II errors (i.e. activations there, but not found).
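To see what I mean, here is a little Monte Carlo sketch (not SPM code, and the effect size d = 0.8 and alpha = 0.05 are arbitrary assumptions for illustration) of how the power of a one-sample t-test grows with the number of subjects:

```python
import numpy as np

# Monte Carlo sketch: power of a two-sided one-sample t-test as a function
# of the number of subjects. d = 0.8 and alpha = 0.05 are assumptions.
rng = np.random.default_rng(0)

def t_stats(x):
    """One-sample t statistic per simulated experiment (rows of x)."""
    n = x.shape[1]
    return x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))

def mc_power(n, d=0.8, alpha=0.05, n_sim=20000):
    """Estimated power of a two-sided one-sample t-test with n subjects."""
    # Empirical critical value from null simulations (avoids needing
    # a t-distribution table).
    t_null = t_stats(rng.normal(0.0, 1.0, size=(n_sim, n)))
    t_crit = np.quantile(np.abs(t_null), 1.0 - alpha)
    # Power: proportion of simulated "true effect" experiments rejected.
    t_alt = t_stats(rng.normal(d, 1.0, size=(n_sim, n)))
    return np.mean(np.abs(t_alt) > t_crit)

for n in (8, 10, 16, 20, 30):
    print(n, mc_power(n))
```

Running this you should see large gains in power going from 8 to 16-20 subjects, and only small gains beyond that.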
>
> I think I have to follow up on your parenthetical note. I'm not sure I
> understand why I'm asked if I want to model non-sphericity (replications
> across subjects). I understand it as a question, whether or not I want
> to estimate variance components (of either the proportional scaling or
> ANCOVA normalization model) in my simple model (10 subs, 5 cond/sub). The
You are of course right. In a multi-condition, multi-subject PET study you would need to model non-sphericity. This is because errors (model misfit) in one condition in one subject are likely to correlate with errors in another condition for that same subject ("good activators" will tend to activate strongly in all conditions and "bad activators" poorly in all conditions). So, if you were to model all your scans in one big model, you would need to model these covariances. And then you are right: there are a lot of parameters to determine from a limited amount of data.
Therefore, in order to determine the "general appearance" of the variance-covariance matrix, SPM uses pooling across voxels. Once that "general appearance" has been determined, there is a single scaling parameter (think of it as the variance) that is determined on a per-voxel basis. The pooling is not across all voxels, but rather across the voxels that survive the threshold (I _think_ the default is 0.05 uncorrected for PET) for an "effects-of-interest" F-contrast, i.e. across voxels that are possibly activated in one or more conditions.
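In numbers, the pooling idea looks roughly like the following (a simplified sketch on simulated residuals, not the actual SPM ReML code; variable names like `CY` only echo the code's terminology):

```python
import numpy as np

# Sketch of the pooling idea: estimate one shared "shape" of the
# scan-by-scan error covariance from residuals pooled over voxels,
# then fit only a single scaling parameter per voxel.
rng = np.random.default_rng(0)
n_scans, n_voxels = 12, 500

# Simulated residuals sharing one correlation structure across scans
# (via a unit-diagonal lower-triangular factor), scaled per voxel.
L = np.eye(n_scans) + 0.3 * np.tril(rng.normal(size=(n_scans, n_scans)), k=-1)
scales = rng.uniform(0.5, 2.0, size=n_voxels)        # true per-voxel variances
E = (L @ rng.normal(size=(n_scans, n_voxels))) * np.sqrt(scales)

# Pooled second-moment matrix (the role E*E' / CY plays in the text),
# normalised so only its "shape" is retained (trace = n_scans).
CY = (E @ E.T) / n_voxels
CY /= np.trace(CY) / n_scans

# Per-voxel scaling: for e_v ~ N(0, sigma2_v * CY), the ML estimate is
# sigma2_v = e_v' CY^{-1} e_v / n_scans.
CY_inv = np.linalg.inv(CY)
sigma2 = np.einsum('iv,ij,jv->v', E, CY_inv, E) / n_scans
```

The per-voxel estimates track the true per-voxel variances, while the covariance structure is only estimated once, from the pool.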
I honestly can't say just how well validated and/or motivated the assumptions behind this pooling are. It's a bit of a "damned if you do, damned if you don't" situation. If you don't pool, the variances of your estimates are going to be quite big, and that may bias the inference. On the other hand, if you do pool, you may potentially also bias things if the true "general appearance" of the variance-covariance matrix is very different in different parts of the brain.
I could certainly think of examples/cases where that could be the case. It is quite accepted (I think) that variances tend to be higher in areas that exhibit some activation. Now imagine we have a case where area A is activated in condition X and area B is activated in condition Y, and that we model unequal variances between conditions. Data from area A would then favor a model with higher variance in condition X, whereas data from area B would favor one with higher variance in condition Y. And I guess SPM would end up deciding somewhere between the two.
At least that would be my concern. Maybe Will can comment if I'm really off the mark?
> ensuing ReML crashes with each iteration resulting in a NaN.
>
> Temporal non-sphericity (over voxels) : ...REML estimation...
>
> Surely there are not enough data to model the variance structure of this
Yes, I've seen this quite a lot as well. One explanation _might_ be that very few voxels survived the threshold (indicating that you have un-modelled effects in your data), which would give SPM a very poor E*E' (I think it is called CY in the code) to work with. But to be honest, I have seen this happen also when CY has looked perfectly OK.
Maybe someone else has experience of that.
> PET design? Could you give an example where non-sphericity correction
> made more sense (or how I should make more sense of the question)?
Now, here comes the point I would like to make. Do *NOT* model variance components unless you absolutely have to. An example of such a case would be when taking more than one parameter estimate per subject and contrast (for example when using an FIR model) to a second-level model. Then I don't think there is any other option.
In your case there is. Set up a "Multi-subj: cond x subj ..." PET model (which will model the effects for each subject separately), create your contrasts, one per subject, at the first level, and use the ensuing con* images in a second-level "Basic Model->One sample t-test" model. This works regardless of whether you have "simple" condition comparisons [1 -1 0 0 0] at your first level, or are doing something parametric (as your subject title indicates) [-3 -1 0 1 3].
Of course you may not find much, given that there will be 9 degrees of freedom. But at least that way you do not run the risk of being severely biased (in either direction) in your inference by poor variance component estimates.
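For concreteness, here is what that summary-statistics route boils down to at a single voxel (a sketch on made-up numbers, not SPM itself; the data layout and effect size are my assumptions, but the parametric contrast [-3 -1 0 1 3] matches the example above):

```python
import numpy as np

# Summary-statistics sketch: one first-level contrast value per subject,
# then a one-sample t-test across subjects. 10 subjects, 5 conditions.
rng = np.random.default_rng(1)
n_subj, n_cond = 10, 5
contrast = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])   # parametric contrast

# Simulated per-subject condition means: a linear effect plus subject
# noise ("good" and "bad" activators differ in overall response).
effect = np.linspace(0.0, 2.0, n_cond)
data = effect + rng.normal(scale=1.0, size=(n_subj, n_cond))

# First level: one contrast value (the "con image" at this voxel) per subject.
con = data @ contrast

# Second level: one-sample t-test on the 10 contrast values, df = 9.
t = con.mean() / (con.std(ddof=1) / np.sqrt(n_subj))
```

Because each subject contributes exactly one number, the between-subject variability is absorbed into the second-level error term, and no variance components need to be estimated.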
Also, I guess you can always do what we used to do back when I did PET: ignore the problem and hope no one notices (to be honest, we/I didn't even know it existed).
Good luck Jesper