Thank you very much for your quick (and encouraging) reply. I think I now
have good arguments to discuss the problem with the editor. However, I'm
not sure whether or not the reviewer is familiar with the standard models
in PET analysis. In fact, he quotes a book chapter that I was
unfortunately unable to consult:
Woods, Iacobini, Grafton, Mazziotta. Improved analysis of functional
activation studies involving within-subject replications using a three-way
ANOVA model. In: Quantification of Brain Function using PET, Myers,
Cunningham, Bailey, Jones (Eds), Academic Press, San Diego, CA, 1996, pp.
Do you know about this technique? I don't know whether it represents a
standard for people who use statistical packages other than SPM, or
whether it is a new proposal that the authors made in this chapter. I
think it would be helpful for me to know this before presenting my
arguments to the editor.
Many thanks for your kind responses and your availability on the SPM
list. This really helps!
Best regards,
PS: I replied first to your address, but it seems that there is a delivery
problem: the message comes back with the following warning:
The original message was received at Tue, 30 Nov 1999 14:43:44 +0100
from cs215.gw.ulg.ac.be [184.108.40.206]
----- The following addresses had transient non-fatal errors -----
<[log in to unmask]>
----- Transcript of session follows -----
<[log in to unmask]>... Deferred: Connection timed out with
Warning: message still undelivered after 12 hours
Will keep trying until message is 1 week old
>> I have a PET study with 12 subjects, 4 conditions repeated thrice in a
>> randomised order (e.g., A B C A D C B A D B C D), and the data
>> processing was done under SPM96 (it's a rather old study). Only the
>> differences between conditions A & B and the differences between
>> conditions C & D are experimentally valid (same input modality and
>> output response for each member of the pair, only the nature of the
>> stimulus differs), and I was interested in the common difference
>> between (A-B) and (C-D), hence a conjunction analysis.
>> I used a multi-subject, multi-condition design with replications, which
>> generates 117 degrees of freedom (12 subjects x 12 scans = 144 scans,
>> minus global effects, block effects and condition effects, hence
>> 144 - ((12 + 12 + 4) - 1) = 117), if my computation is correct.
>> However, a referee states that, to ensure replication of the results,
>> it would be better to use a three-way multivariate design modelling the
>> task by subject interaction, the task by repeat interaction, and the
>> subject by repeat interaction. Hence, in his opinion, such a
>> multivariate design (12 subjects by 4 tasks by 3 repeats) should
>> generate 11 x 3 x 2 = 66 df's. As far as I understand the SPM
>> implementation, this kind of design should be generated only using
>> RFX. Am I wrong here?
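The two df figures quoted above can be checked with a few lines of arithmetic (a sketch only; the 117 follows the fixed-effects bookkeeping described in the message, and the 66 is the three-way interaction df the referee proposes):

```python
# Sketch of the two df claims discussed above (factor sizes from the study).
n_subj, n_cond, n_reps = 12, 4, 3
n_scans = n_subj * n_cond * n_reps                        # 144 scans in total

# Fixed-effects model: 12 global covariates, 12 block effects and
# 4 condition effects share one constant, so (12 + 12 + 4) - 1
# parameters are removed from the 144 scans.
df_fixed = n_scans - ((n_subj + n_subj + n_cond) - 1)     # 117

# Referee's three-way ANOVA: the subject-by-task-by-repeat interaction
# carries the product of the individual factor df's.
df_threeway = (n_subj - 1) * (n_cond - 1) * (n_reps - 1)  # 11 * 3 * 2 = 66

print(df_fixed, df_threeway)  # 117 66
```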
>You could model the subject by condition interactions in a
>fixed-effects analysis (simply by modelling the condition-specific
>effects for each subject separately), or you could use the subject by
>condition interactions as [effectively] the error variance in a
>random-effects analysis. However, in PET, the conventional analysis
>would be a fixed-effects one, as you have already implemented. There is
>no precedent for analysing all the replication and subject interactions
>in the PET neuroimaging literature (and you should make this clear to
>the editor).
>> However, with one adjusted mean image per condition per subject (in
>> SPM96), it is impossible that the RFX design generates 66 df's, since
>> only 48 adjusted mean images (12 subjects by 4 conditions) are entered
>> in the second-level analysis. In SPM99, given Andrew's explanations in
>> response to my previous message regarding RFX (28/10/99; thanks a lot,
>> by the way), I suppose I have to enter the con_?.img resulting from
>> individual comparisons at the first level (A-B and C-D separately for
>> each subject), which results in 24 con_?.img to enter into the
>> second-level t-test. Hence, again, 66 df's is clearly not possible.
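As a quick sanity check on the second-level arithmetic (a sketch; a one-sample t-test on k images has k - 1 error df, and the 33 for the SPM96 route is my own bookkeeping under the stated model):

```python
# Second-level df under the two RFX variants mentioned above.
n_subj, n_cond = 12, 4

# SPM99 route: one A-B and one C-D con image per subject, so 24 images
# in a one-sample t-test -> 23 error df.
df_con = 2 * n_subj - 1                       # 23

# SPM96 route: 48 adjusted mean images modelled with 12 subject and
# 4 condition effects sharing one constant -> 48 - 15 = 33 error df.
df_means = 48 - ((n_subj + n_cond) - 1)       # 33

print(df_con, df_means)  # 23 33 -- neither is anywhere near 66
```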
>This is correct but I would not go down this road (see above).
>> Another problem is that I'm not really sure whether it is correct to
>> keep the two subtractions separate in RFX, because Andrew states that
>> usually only one contrast per subject should enter the 2nd level. On
>> the other hand, estimating a simple main effect of the 4 conditions
>> (+A -B +C -D) is not valid from a cognitive conjunction perspective.
>> In response to a somewhat similar problem, Karl proposed (02/02/99) to:
>> "take estimates of the simple main effects to the second level (e.g.
>> with contrasts 1 -1 0 0, 0 0 1 -1) and model these separately (using
>> 'multiple regression' in 'Basic models' and [0,...0, 1,...1] for the
>> first regressor and [1,...1, 0,...0] for the second). A conjunction of
>> second level contrasts [0 1 and 1 0] should give you what you are
>> after". However, when I tried to do that, SPM99 refused to estimate a
>> simple contrast [1 0]: only [1 -1] is accepted, hence the conjunction
>> is not possible. Did I make a mistake here?
>No - this problem has been encountered before and is due to the fact
>that a constant term is added to the basic options design matrix at the
>second level, thereby rendering the contrasts inestimable.
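The estimability problem can be illustrated numerically (a sketch with the sizes from this study; a contrast c is estimable only if it lies in the row space of the design matrix X, which fails for [1 0 0] once the constant column is appended):

```python
import numpy as np

n = 12                                 # contrast images per group (A-B, C-D)
g1 = np.r_[np.ones(n), np.zeros(n)]    # indicator for the A-B images
g2 = np.r_[np.zeros(n), np.ones(n)]    # indicator for the C-D images
const = np.ones(2 * n)                 # constant term added at the 2nd level
X = np.c_[g1, g2, const]               # 24 x 3, but rank 2: g1 + g2 == const

def estimable(c, X):
    """c is estimable iff it is unchanged by projection onto X's row space."""
    return bool(np.allclose(c @ np.linalg.pinv(X) @ X, c))

print(estimable(np.array([1., -1., 0.]), X))  # True: the difference survives
print(estimable(np.array([1., 0., 0.]), X))   # False: confounded w/ constant
```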
>> So my questions are:
>> Am I correct in supposing that the kind of model proposed by the
>> referee can only be computed using RFX? And how can I justify the
>> discrepancy in df's?
>No. Stick to your original analysis and ensure the editor knows the
>reviewer is not familiar with the standard statistical models employed
>in PET data analysis. Your fixed-effect analysis is perfectly fine as
>long as your inference is restricted to the subjects studied. Even if
>there are profound subject by condition interactions, you are not
>obliged to model them if your model is valid and sufficient to make the
>inferences you want to make.
>I hope this helps (and doesn't get you into trouble!) - Karl