They were computed by affine-registering a large number of images (227 of
them) to MNI space. This gave the affine transforms, which were then used
to build up a mean and an inverse covariance matrix. In the comments of
spm_maff.m, you should see the following...
%% Values can be derived by...
%sn = spm_select(Inf,'.*seg_inv_sn.mat$');
%X = zeros(size(sn,1),12);
%for i=1:size(sn,1),
% p = load(deblank(sn(i,:)));
% M = p.VF(1).mat*p.Affine/p.VG(1).mat;
% J = M(1:3,1:3);
% V = sqrtm(J*J');
% R = V\J;
% lV = logm(V);
% lR = -logm(R);
% P = zeros(12,1);
% P(1:3) = M(1:3,4);
% P(4:6) = lR([2 3 6]);
% P(7:12) = lV([1 2 3 5 6 9]);
% X(i,:) = P';
%end;
%mu = mean(X(:,7:12));
%XR = X(:,7:12) - repmat(mu,[size(X,1),1]);
%isig = inv(XR'*XR/(size(X,1)-1))
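If it helps, here is a rough NumPy/SciPy sketch of what a 2D analogue of the
above recipe might look like. This is not SPM code: the function names are
mine, the input is assumed to be a set of 3x3 homogeneous 2D affine matrices,
and the 6-parameter layout (2 translations, 1 rotation, 3 unique zoom/shear
elements) is an assumption.

```python
# Hypothetical 2D analogue of the spm_maff.m recipe above (a sketch, not
# SPM code).  Each input matrix is a 3x3 homogeneous 2D affine.
import numpy as np
from scipy.linalg import sqrtm, logm

def affine_params_2d(M):
    """Decompose one 2D affine into 6 parameters:
    2 translations, 1 rotation element, 3 unique zoom/shear elements."""
    J = M[:2, :2]
    V = np.real(sqrtm(J @ J.T))      # symmetric part (zooms and shears)
    R = np.linalg.solve(V, J)        # rotation part, R = V\J
    lV = np.real(logm(V))
    lR = -np.real(logm(R))
    return np.array([M[0, 2], M[1, 2],               # translations
                     lR[0, 1],                       # single 2D rotation
                     lV[0, 0], lV[0, 1], lV[1, 1]])  # unique sym. elements

def priors_2d(mats):
    """mu and inverse covariance of the zoom/shear block, as in the
    spm_maff.m comments, but for 2D."""
    X = np.array([affine_params_2d(M) for M in mats])
    Z = X[:, 3:]                     # keep only zooms and shears
    mu = Z.mean(axis=0)
    ZR = Z - mu
    isig = np.linalg.inv(ZR.T @ ZR / (X.shape[0] - 1))
    return mu, isig
```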
You may be able to re-code this to work with your 2D transforms. Note
that I only use the components that describe shears and zooms. Also,
if I were re-coding it all again, I would probably do things slightly
differently, using the following decomposition:
J=M(1:3,1:3);
L=real(logm(J));
S=(L+L')/2; % Symmetric part (for zooms and shears, which should be penalised)
A=(L-L')/2; % Anti-symmetric part (for rotations)
I might even base the penalty on lambda(2)*trace(S)^2 +
lambda(1)*trace(S*S'), which is often used as a measure of "elastic
energy" for penalising non-linear registration. Of course, there
would also be a bit of annoying extra stuff to deal with due to MNI
space being a bit bigger than average brains are. Figuring out the
mean would require (I think) an iterative process similar to the one
described in "Characterizing volume and surface deformations in an
atlas framework: theory, applications, and implementation", by Woods
in NeuroImage (2003).
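As a rough illustration of that penalty (again a NumPy/SciPy sketch, not SPM
code; the lambda weights are placeholders that would need tuning):

```python
# Sketch of the suggested "elastic energy" style penalty on an affine:
# decompose the log of the linear part into symmetric and antisymmetric
# parts, then penalise lambda2*trace(S)^2 + lambda1*trace(S*S').
import numpy as np
from scipy.linalg import logm

def elastic_penalty(M, lam1=1.0, lam2=1.0):
    """Penalty on a 4x4 homogeneous 3D affine matrix (weights are
    illustrative placeholders)."""
    J = M[:3, :3]
    L = np.real(logm(J))
    S = (L + L.T) / 2   # symmetric part: zooms and shears (penalised)
    A = (L - L.T) / 2   # antisymmetric part: rotations (unpenalised)
    return lam2 * np.trace(S) ** 2 + lam1 * np.trace(S @ S.T)
```

Note that a pure rotation incurs (numerically) zero penalty, since it ends up
entirely in the antisymmetric part.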
Best regards,
-John
On 14 April 2011 17:15, Siddharth Srivastava <[log in to unmask]> wrote:
> Hi all,
> My question pertains specifically to the values of isig and mu that are
> incorporated in the code, and I wish to disentangle the effect that these
> numbers have on individual parameters during the optimization stage. I am
> trying to map it to a 2D case, and these numbers will have to be calculated
> ab initio for my case. Before I begin estimating the prior distribution for
> these parameters, I need to check the effect that they have on the
> registered images. Regarding this:
> 1) Can someone suggest a way to examine the effect of each parameter
> separately, if that is at all possible?
> 2) Any advice on how to estimate these parameters, if this has been done
> for images other than brain images, would be really helpful for me.
>
> Thanks,
> Sid.
>
> On Mon, Mar 28, 2011 at 7:04 PM, Siddharth Srivastava <[log in to unmask]>
> wrote:
>>
>> Dear John,
>> Thanks for your answer. I was able to run my 2D implementation correctly
>> based on your advice. However, I am not very confident of my results and
>> wanted to consult you on certain aspects of my mapping from 3D to 2D.
>> 1) In function reg(..), the comment says that the derivatives
>> w.r.t. rotations and translations are zero. Consequently, I ran the
>> indices from 4:7. According to your previous reply, rotations will be
>> lumped together with the zooms and shears in 4 parameters, and the
>> translations, independently, in the last two. I understand that in this
>> case the indices run over parameters that have been separated into
>> individual components (similar to the form accepted by spm_matrix). Is
>> this correct for the function reg(..)? In fact, I went back to the old
>> SPM code (SPM99), found the parameters pdesc and free, and got more
>> confused as to when I should use the 6 parameters and when to use the
>> separate parameters (as if I were passing them to spm_matrix).
>> So, when the loop runs over p1 and perturbs p1(i) by ds, does it run
>> over translations/rotations etc., or does it run over the matrix
>> elements that define these transformations (2x2 + 1x2)?
>>
>> 2) In function penalty(..), I have used the unique elements as
>> [1,3,4], and my code looks like:
>> T = my_matrix(p)*M;
>> T = T(1:2,1:2);
>> T = 0.5*logm(T'*T);
>> T = T(els)' - mu;
>> h = T'*isig*T;
>>
>> Assuming an application-specific isig, is this correct?
>> 3) I have not yet incorporated the prior information in my code,
>> but for testing purposes, can you suggest some neutral values for mu and
>> isig that I can try, so that I can concentrate on the accuracy of the
>> implementation minus the priors for now?
>> 4) My solution currently seems to oscillate around an optimum. I
>> am hoping that, after I have removed any errors following your replies,
>> it will converge gracefully.
>>
>> Thanks again for all the help.
>> Siddharth.
>>
>>
>> On Fri, Mar 18, 2011 at 2:05 AM, John Ashburner <[log in to unmask]>
>> wrote:
>>>
>>> A 2D affine transform would use 6 parameters, rather than 7: a 2x2
>>> matrix to encode rotations, zooms and shears, and a 2x1 vector for
>>> translations.
>>>
>>> Best regards,
>>> -John
>>>
>>> On 18 March 2011 05:05, Siddharth Srivastava <[log in to unmask]> wrote:
>>> > Dear list users,
>>> > I am currently trying to understand the implementation in
>>> > spm_affreg, and to make matters simple, I tried to work out a 2D
>>> > version of the registration procedure. I struck a roadblock, however...
>>> >
>>> > The function make_A creates part of the design matrix. For the 3D
>>> > case, the 13 parameters are encoded in the columns of A1, which is
>>> > then used to generate AA + A1'*A1. AA is 13x13, and everything works.
>>> > For the 2D case, however, there will be 7+1 parameters (2
>>> > translations, 1 rotation, 2 zooms, 2 shears and 1 intensity scaling).
>>> > The A1 matrix for 2D, then, will be (in make_A)
>>> > A = [x1.*dG1 x1.*dG2 ...
>>> >      x2.*dG1 x2.*dG2 ...
>>> >      dG1 dG2 t];
>>> >
>>> > which is (number of sampled points) x 7. The AA matrix (in function
>>> > costfun) is 8x8. So which parameter am I missing in modelling the 2D
>>> > example? I hope I have got the number of parameters right for the 2D
>>> > case. Is there an extra column that I have to add to A1 to make the
>>> > rest of the matrix computation consistent?
>>> >
>>> > Thanks for the help
>>> >
>>
>
>