Hi,
I do apologise for the long question, but I got quite into the details and have spent
LOADS of time on the problem.
I would like to run multiple linear regression–like analysis of FA data
(call them Y) using Randomise.
(I have some experience with multiple linear regression, both theoretically and
with SPSS-type software, but I have trouble with the GLM way of
feeding in the data. I assume that in multiple regression there is no need for
orthogonalising or pre-regressing-out; the regression
coefficients are simply computed, with every variable controlled for all the others.)
In my design, the simple version is:
I have one EV of interest (call it X - a psychometric score) and several
EVs of no interest (call them C's) that I would like to control for (age, sex,
...).
What I have done is to put all the C's into the -x option matrix, leaving
only X and a constant in the design matrix.
I am still unsure what the results actually say, since:
The C's in the -x matrix are NOT orthogonal to each other.
(Should I orthogonalise them?)
The C's and X are not orthogonal either.
(Should I orthogonalise X with respect to all the C's?)
I am also not sure whether including the constant in the design matrix is OK when
using randomise, since the constant and X are not orthogonal
(normally, I guess, it is equivalent to demeaning both the FA data and the
design matrix).
What is randomise actually computing if I include X in the first column and the
constant in the second, and set the contrast [1 0]?
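To make the question concrete, this is how I currently understand what such a contrast computes in an ordinary-least-squares GLM (a numpy sketch with made-up data; the variable names are my own, and randomise would of course assess significance via permutations rather than this parametric t):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)                # EV of interest (psychometric score)
M = np.column_stack([x, np.ones(n)])  # design matrix: [X, constant]
y = 0.5 * x + rng.normal(scale=0.3, size=n)  # one voxel's FA values (toy)

beta, res, rank, _ = np.linalg.lstsq(M, y, rcond=None)
c = np.array([1.0, 0.0])              # contrast [1 0] picks out the X coefficient

# t-statistic for the contrast c'beta
dof = n - rank
sigma2 = res[0] / dof
t = c @ beta / np.sqrt(sigma2 * (c @ np.linalg.inv(M.T @ M) @ c))
print(beta[0], t)
```

Is that roughly what randomise computes per voxel before permuting?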
Another question is about "regressing out". From earlier postings, I
understand that C is regressed out of the data Y (Y' = Y - r(Y,C)*C), not out of
X. In other words, it is NOT orthogonalising X with respect to C, but
orthogonalising the data with respect to C.
I guess that if X and C are orthogonal, then regressing Y' on X gives the same
results as running the full multiple regression (Y on X and C) to find the
regression coefficients.
If X and C are not orthogonal, how can I interpret the results / reproduce
the multiple regression?
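To check my own understanding, I tried the following numpy sketch (toy data, not real FA values). As far as I can tell, regressing C out of the data Y alone does NOT reproduce the full multiple-regression coefficient when X and C are correlated; it only matches once X is residualised with respect to C as well (which I believe is the Frisch-Waugh-Lovell result):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
c = rng.normal(size=n)                 # confound (e.g. age)
x = 1.0 * c + rng.normal(size=n)       # EV of interest, correlated with C
y = 1.0 * x + 0.8 * c + rng.normal(size=n)

def resid(v, on):
    """Residualise v with respect to regressors `on` plus a constant."""
    M = np.column_stack([on, np.ones(len(v))])
    return v - M @ np.linalg.lstsq(M, v, rcond=None)[0]

# Full multiple regression: coefficient of X controlling for C
M_full = np.column_stack([x, c, np.ones(n)])
b_full = np.linalg.lstsq(M_full, y, rcond=None)[0][0]

# Regress C out of the DATA only, then regress Y' on X (and constant)
y_r = resid(y, c)
b_data_only = np.linalg.lstsq(np.column_stack([x, np.ones(n)]),
                              y_r, rcond=None)[0][0]

# Residualise BOTH Y and X with respect to C, then regress
x_r = resid(x, c)
b_fwl = np.linalg.lstsq(np.column_stack([x_r, np.ones(n)]),
                        y_r, rcond=None)[0][0]

print(b_full, b_data_only, b_fwl)  # b_fwl matches b_full; b_data_only does not
```

Is that the right way to think about what the -x option is doing?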
Later, I will be interested in more than one EV of interest (X having several
columns). They are NOT mutually orthogonal. In multiple regression, I would just
feed them all in, and the model would control every variable for all the others,
EVs of interest and confounds alike, with no orthogonalising or regressing out first.
What does randomise do when I feed them all into the design matrix without
orthogonalisation? (I would then use contrasts such as [1 0] and [0 1], setting
aside my concerns above about including the constant in the model.)
Do I get results that are interpretable analogously to feeding the variables
into an ordinary multiple regression?
Or do I have to run the analysis several times, each time orthogonalising a
given EV with respect to all the others in order to control for them?
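My own quick numpy check (toy data again) suggests that the separate orthogonalised runs would be redundant: orthogonalising the EV of interest with respect to the other regressors leaves its own coefficient unchanged, and only the other coefficients move. But please correct me if this does not carry over to how randomise handles the design:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 150
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)     # two correlated EVs of interest
y = 0.7 * x1 - 0.3 * x2 + rng.normal(size=n)

ones = np.ones(n)

# Full design, no orthogonalisation
b_plain = np.linalg.lstsq(np.column_stack([x1, x2, ones]), y, rcond=None)[0]

# Orthogonalise x1 with respect to x2 (and the constant), then refit
M2 = np.column_stack([x2, ones])
x1_orth = x1 - M2 @ np.linalg.lstsq(M2, x1, rcond=None)[0]
b_orth = np.linalg.lstsq(np.column_stack([x1_orth, x2, ones]),
                         y, rcond=None)[0]

# The coefficient of the orthogonalised EV equals the plain one;
# only the coefficients of the remaining regressors change.
print(b_plain[0], b_orth[0])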
Last question:
Can I reasonably interpret the contrast [1 -1] between two non-orthogonal
variables?
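For reference, here is what I understand a [1 -1] contrast to test, as a numpy OLS sketch on made-up data (testing H0: beta1 - beta2 = 0; again, randomise would build a permutation distribution rather than rely on this parametric t):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)    # deliberately non-orthogonal to x1
y = 1.5 * x1 + 0.2 * x2 + rng.normal(size=n)

M = np.column_stack([x1, x2, np.ones(n)])
beta, res, rank, _ = np.linalg.lstsq(M, y, rcond=None)
con = np.array([1.0, -1.0, 0.0])      # tests H0: beta1 - beta2 = 0

dof = n - rank
t = con @ beta / np.sqrt((res[0] / dof) * (con @ np.linalg.inv(M.T @ M) @ con))
print(beta[:2], t)
```

Does the non-orthogonality of x1 and x2 invalidate this comparison, or is it absorbed into the standard error of the contrast?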
I am sorry if I did not express myself clearly,
Thanks a lot for any help,
Jaroslav Hlinka