Dear Reut,
> I mean that if, for example, I have 15 subjects and each of them
> has scores on 3 different behavioural tests - A, B and C - and age
> as a confounding variable, the design.con can look like this:
>
>
> 1 0 0 0
> -1 0 0 0
> 0 1 0 0
> 0 -1 0 0
> 0 0 1 0
> 0 0 -1 0
>
> or I can split everything up and have 3 different design.con files
> that look like this:
>
> 1 0
> -1 0
>
> and run randomise on each design separately.
> The question is: could the results be different in
> the two cases?
> Hope it's more understandable now.
It is quite likely that the results will be different when you run an
"all inclusive" design compared to running them separately. When you
test for an effect in the "full" model, you test for anything that can
be explained by that EV (or those EVs) after anything that can be
explained by any of the other EVs has been removed. The reason this is
a good thing is that if you have two EVs that are correlated, e.g.
age and "duration of disease", then anything (in the brain) that is
correlated with them could be caused either by age OR by duration of
disease. Hence it is not really clear which EV to assign that effect
to, and it is better to play it safe and assign it to neither.
A crucial part of good design is to identify such correlations already
at the planning stage and to try to minimise them through the way one
recruits subjects, designs the task, etc.
Hence, my recommendation would be to always include all of one's EVs
in a "full" design.
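The variance-partitioning point above can be sketched with a small
numpy simulation (hypothetical data, not your actual scores; the
sample is larger than your 15 subjects only to keep the illustration
stable): when two EVs are correlated, the coefficient an EV gets in
the joint fit differs from the one it gets on its own, because the
joint fit only credits it with variance the other EV cannot explain.

```python
import numpy as np

# Illustrative (hypothetical) data: two correlated covariates, as in
# the age / "duration of disease" example above.
rng = np.random.default_rng(0)
n = 200
age = rng.normal(size=n)
duration = 0.8 * age + 0.2 * rng.normal(size=n)   # correlated with age
y = age + rng.normal(scale=0.5, size=n)           # signal driven by age only

# Full model: both EVs plus an intercept.  Each coefficient only gets
# the variance its EV explains after the other EV is accounted for.
X_full = np.column_stack([age, duration, np.ones(n)])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Reduced model: duration alone.  It "borrows" the shared variance and
# appears to have a large effect even though the signal came from age.
X_dur = np.column_stack([duration, np.ones(n)])
beta_dur, *_ = np.linalg.lstsq(X_dur, y, rcond=None)

print("duration effect, full model:", beta_full[1])  # near zero
print("duration effect, on its own:", beta_dur[0])   # large
```

This is the same reason randomise on a "full" design.con can give
different results from three separate runs: each contrast in the full
model is tested only against the variance left over by the other EVs.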
Good luck,
Jesper