Hi,
On 6 Jul 2007, at 20:08, Anna Engels wrote:
> You recommended demeaning each questionnaire, creating the interactions
> based on these demeaned EVs, and not worrying about any further
> orthogonalization. Our understanding is that orthogonalizing EV 1 wrt
> EV 2 gives all the shared variance to EV 2. We had originally planned to
> orthogonalize the interactions wrt the questionnaires so that the
> variance shared by the interaction and questionnaire EVs would go to the
> questionnaires. How would creating the interactions from demeaned
> questionnaires accomplish the same goal?
If you explicitly want to force all the shared variance into the
questionnaires then indeed you should orth the interaction wrt the
questionnaires. If you don't orth then any shared variance just
doesn't get used in the final stats (it's only the unique part of the
variance in any given EV that gives rise to statistical significance
for that EV).
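To see this concretely, here is a minimal numpy sketch with made-up data (not FEAT itself): with two correlated EVs entered as-is, each PE reflects only its unique variance; orthogonalising EV1 wrt EV2 leaves EV1's PE unchanged but pushes the shared variance into EV2's PE.

```python
import numpy as np

# Toy data: ev1 deliberately shares variance with ev2
rng = np.random.default_rng(0)
n = 40
ev2 = rng.normal(size=n)
ev1 = 0.6 * ev2 + rng.normal(size=n)
y = 2.0 * ev1 + 1.0 * ev2 + 0.5 * rng.normal(size=n)

# PEs with both EVs entered as-is: each PE uses only unique variance
X = np.column_stack([ev1, ev2])
b_plain = np.linalg.pinv(X) @ y

# Orthogonalise ev1 wrt ev2 first: ev2's PE now absorbs the shared variance
ev1_orth = ev1 - ev2 * (ev2 @ ev1) / (ev2 @ ev2)
b_orth = np.linalg.pinv(np.column_stack([ev1_orth, ev2])) @ y

# b_orth[0] matches b_plain[0]; b_orth[1] differs from b_plain[1]
print(b_plain)
print(b_orth)
```

The PE for the orthogonalised EV is unchanged (this is the Frisch-Waugh result); only the EV that receives the shared variance changes.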
> Also, we have been doing some testing of orthogonalization procedures.
> Say, for example, we have a model with EVs for the group mean and for
> two questionnaires (A and B). We add in an EV representing the
> interaction between the two questionnaires and orthogonalize this EV
> wrt the EVs for questionnaires A and B. Since adding in a new EV will
> account for more error variance, we would predict that the zstats for
> the two questionnaires will change. However, since the interaction EV
> is orthogonalized wrt the questionnaire EVs, we think the PEs for
> questionnaires A and B should be unaffected by the addition of the
> interaction to the model. Is that correct?
Indeed. If you add in a new EV which is orth to everything else then
you're right that the PEs (parameter estimates or betas) don't change
for the already-existing EVs. However the new EV can still soak up
error variance, i.e., reduce the residuals, so the zstats can
increase on the original EVs.
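A quick numpy sketch of this point (illustrative toy data, not FEAT output): the PEs for the original EVs are identical with and without the new orthogonal EV, while the residual sum of squares drops.

```python
import numpy as np

# Toy design: mean EV plus one covariate
rng = np.random.default_rng(1)
n = 50
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])

# New EV, made orthogonal to the existing design before being added
new_ev = rng.normal(size=n)
new_ev -= X1 @ (np.linalg.pinv(X1) @ new_ev)

y = X1 @ np.array([1.0, 1.5]) + 0.8 * new_ev + rng.normal(size=n)

b1 = np.linalg.pinv(X1) @ y                       # PEs without the new EV
X2 = np.column_stack([X1, new_ev])
b2 = np.linalg.pinv(X2) @ y                       # PEs with the new EV

rss1 = np.sum((y - X1 @ b1) ** 2)                 # residuals without
rss2 = np.sum((y - X2 @ b2) ** 2)                 # residuals with (smaller)
```

Identical PEs with smaller residuals is exactly why the zstats on the original EVs can go up.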
> We did some tests of the hypothesis that the PEs will remain unchanged
> and have found some confusing results. We first ran two HLAs
> (higher-level analyses) using raw (i.e. not demeaned) questionnaire
> scores.
>
> Model 1: Composed of 3 EVs. One EV for the group mean, one for
> questionnaire A, and one for questionnaire B. None of the EVs were
> orthogonalized to each other.
> Model 2: Composed of 4 EVs. One EV for the group mean, one for
> questionnaire A, one for questionnaire B, and one for the interaction
> of questionnaires A and B (created by multiplying A and B together).
> The interaction EV was orthogonalized wrt the EVs for questionnaires A
> and B (i.e. we clicked the buttons underneath the interaction EV
> corresponding to A and B).
>
> We compared the questionnaire A PE for model 1 to the questionnaire A
> PE for model 2 and found that they were not identical. The max
> difference in intensity between the PEs for the two models was 8.67.
> Since the maximum intensity of the PEs for both models was
> approximately 9, the difference of 8.67 seems large. We repeated this
> for questionnaire B and found a similar difference.
Sure - three things:
1. It may not be very informative to look at the max of the
difference. I would recommend subtracting the two PE maps and viewing
the difference and its histogram in fslview to get a more complete
feel for how different they really are.
2. I'm guessing that you were using FLAME ME modelling and not OLS
for the higher-level analysis. I would expect OLS to give very close
to exactly the same results in the two cases (though see point 3),
but the more complex modelling in the ME options may give slightly
different answers (though should still be similar overall).
3. Or this may be due to a bug which we recently found in FEAT and
which will be fixed in the new release: if you orth an EV wrt more
than one other EV, FEAT does the orth one EV at a time, which can be
inaccurate if the EVs you're orthing wrt are not already orthogonal
to each other (see recent emails about randomise). If your A and B
are not orthogonal then that may explain the result.
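Here is a toy numpy sketch of why the one-at-a-time approach can go wrong (made-up data; this is not FEAT's actual code): after removing A and then B one at a time, the result is no longer orthogonal to A, because B itself contains some A; projecting both out jointly avoids this.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
a = rng.normal(size=n)
b = 0.7 * a + rng.normal(size=n)   # A and B deliberately non-orthogonal
inter = a * b                      # the interaction EV

def orth_wrt(v, u):
    """Remove from v its component along u (single-EV orthogonalisation)."""
    return v - u * (u @ v) / (u @ u)

# One at a time: removing b afterwards reintroduces a component along a
seq = orth_wrt(orth_wrt(inter, a), b)

# Both at once: project out the column space of [a, b] jointly
X = np.column_stack([a, b])
joint = inter - X @ (np.linalg.pinv(X) @ inter)

print(abs(seq @ a))                # clearly non-zero
print(abs(joint @ a), abs(joint @ b))   # both essentially zero
```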
> We are unclear about the source of this large difference in
> questionnaire PEs between the two models. We then repeated this test,
> but this time we orthogonalized all EVs wrt the group mean (i.e. we
> clicked the button under each EV corresponding to the group mean). We
> again compared the questionnaire PEs for the two models and found that
> they were still not identical, but this time the difference was
> smaller (the max difference in intensity was 3.8; the maximum
> intensity of both PEs was approximately 9).
>
> We are left confused about how to set up our model and have a few
> questions.
> 1. Why is there a difference in questionnaire PEs for models 1 and 2?
> 2. Why does orthogonalizing wrt the group mean reduce this difference?
Probably because then A and B are closer to being orthogonal to each
other (see the comment above on the bug).
> 3. Is there a way to make the PEs for models 1 and 2 equivalent (and
> is this a desirable goal)?
> 4. Given that there is a difference, which model is more appropriate
> for interpreting the effects of questionnaires A and B?
For now you could get around the bug and still use model 2:
1. Turn off the orthogonalisation of EV4 and save the design to tmp.fsf.
2. Edit tmp.mat, removing all the header stuff so that just the design
matrix numbers are left, then rename it:
   mv tmp.mat tmp.txt
3. In matlab:
   x = load('tmp.txt');                                % the design matrix
   newEV4 = x(:,4) - x(:,2:3)*(pinv(x(:,2:3))*x(:,4)); % project EVs 2 and 3 out of EV4 jointly
   save 'newEV4.txt' newEV4 -ascii
4. Now you can insert the values from newEV4 into EV4; don't orth EV4 in
the GUI.
Hope this makes sense!
Cheers.
>
> Thanks for your help,
> ~Anna
---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director, Oxford University FMRIB Centre
FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
+44 (0) 1865 222726 (fax 222717)
[log in to unmask] http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------