Hi Helmut,
> I am trying to orthogonalize data: RT = RT - X*(pinv(X)*RT);
> I think spm_orth is the implementation of this with x=RT (SPM uses this
> function to orthogonalize basis functions). It works column-wise on a matrix X:
>
> x = X(:,1);
> for i = 2:size(X,2)
>     D = X(:,i);
>     D = D - x*(pinv(x)*D);
>     if any(D)
>         x = [x D];
>     end
> end
>
> Unfortunately, I am mathematically not very inclined and have come across
> the following observation (= problem to me). My understanding of
> orthogonalization is that the column vectors get 'decorrelated'. To convince
> myself of this effect, I calculated Pearson's R before and after
> orthogonalization, i.e.
>
> corrcoef(X) vs. corrcoef(spm_orth(X))
>
> and found that for a set of [some random] trial data I get higher absolute
> correlation coefficients after the orthogonalization (but they are
> negative). Again, my understanding is that even a negative but higher
> correlation means that I did not achieve what I had wanted.
>
This was quite a fun observation, and I must admit it puzzled me for a
bit.
The vectors returned from spm_orth ARE orthogonal, which you can
convince yourself about with
spm_orth(X)' * spm_orth(X)
However, they do not have zero correlation coefficients, since that would
require the off-diagonal elements of cov(spm_orth(X)) to be zero, which
is not the case.
Your "confusion" simply stems from the fact that the columns do not have
zero mean (i.e. they have a common DC component), which means that their
covariances aren't necessarily zero. If you try
corrcoef(spm_orth(spm_detrend(X)))
you will see that the off-diagonal correlations are all zero.
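If it helps to see this numerically outside MATLAB, here is a small numpy sketch of the same pinv-based loop quoted above (spm_orth itself is the SPM MATLAB function; the `orth_cols` name, the random test data, and the small numerical tolerance in the `any` check are my own illustration):

```python
import numpy as np

def orth_cols(X):
    # Column-wise Gram-Schmidt-style orthogonalization via pinv,
    # mirroring the quoted spm_orth loop.
    x = X[:, [0]]
    for i in range(1, X.shape[1]):
        D = X[:, [i]]
        D = D - x @ (np.linalg.pinv(x) @ D)
        if np.any(np.abs(D) > 1e-10):   # tolerance instead of MATLAB's any(D)
            x = np.hstack([x, D])
    return x

rng = np.random.default_rng(0)
X = rng.random((100, 3)) + 5.0          # columns with a strong DC component
O = orth_cols(X)

# The columns ARE orthogonal: the Gram matrix O'O is diagonal ...
G = O.T @ O
off_diag_gram = G - np.diag(np.diag(G))

# ... yet their correlation coefficients are not zero, because the
# columns do not have zero mean.
R = np.corrcoef(O, rowvar=False)

# Removing the mean first (the effect of spm_detrend) makes the
# correlations vanish as well: orthogonal zero-mean columns have
# zero covariance, hence zero correlation.
Xd = X - X.mean(axis=0)
Rd = np.corrcoef(orth_cols(Xd), rowvar=False)
```

Running this, `off_diag_gram` is numerically zero while `R` has sizeable (negative) off-diagonal entries, and `Rd` is the identity matrix, which is exactly the behaviour you observed.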
Good luck,
Jesper