Dear Arthur,
I'm afraid that for your images there is unlikely to be a consistent,
reliable and robust solution from FLIRT. We really need at least one
image to have good anatomical signal and minimal artifact.
In general, the smaller the starting search range, the more densely the
cost function is sampled within that region. But for your images I
suspect that the cost function is so noisy and full of local minima
that you won't be able to get a good, reliable, robust registration
regardless.
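
For reference, the search range and its sampling are controlled on the
flirt command line with the options below; the file names are
placeholders and the particular values are only an illustration, not a
recommendation for your data:

    flirt -in input_img -ref reference_img -out registered_img \
          -omat registered_img.mat -dof 6 -cost mutualinfo \
          -interp trilinear \
          -searchrx -30 30 -searchry -30 30 -searchrz -30 30 \
          -coarsesearch 20 -finesearch 9

The -searchrx/-searchry/-searchrz options bound the rotation search (in
degrees), and -coarsesearch/-finesearch set the angular increments used
when sampling within that range.
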
All the best,
Mark
On 24 Mar 2006, at 09:30, Arthur Mikhno wrote:
> Thanks Mark,
>
> This is really great... I appreciate that you took the time to look
> at this problem for me.
>
> I was wondering if you can comment on why results on the same
> platform vary when choosing different starting angles.
> Rephrased: on the SAME platform, should you expect the same result
> when choosing different search angles? Which search angle is most
> reliable?
>
> Thanks,
> Arthur
>
>
> On Mar 24, 2006, at 12:15 AM, Mark Jenkinson wrote:
>
>> Dear Arthur,
>>
>> I've just had a look at your images and now I understand why the
>> results are so variable. I'm no expert on non-MR data, but I assume
>> that these are PET or SPECT images. They have very poor anatomical
>> detail, significant reconstruction artifact (radial streaks) and
>> also seem to have some artifacts in the inferior portions (large
>> black areas). I've put some example gif images of these volumes at:
>> http://www.fmrib.ox.ac.uk/~mark/files/fsl/
>> so that you can confirm that these are the images that you are
>> dealing with and that the transfer did not corrupt them.
>>
>> Given these images I would expect that flirt would struggle to find
>> a good and stable registration (a good, strong minimum in the cost
>> function), and so this explains why you see your variations of up to
>> 2 mm. I'm afraid that flirt and fsl are tuned for MR data. One thing
>> which might help is to do brain extraction (which we recommend
>> anyway, as a matter of course - see the FSL FAQ for details of this
>> and other registration recommendations).
>>
>> Unfortunately, BET does a poor job on these images (again, it was
>> developed for MR images), so you will have to find some other way.
>> However, it is worth doing if you want to try to get good
>> registrations, as the background of these images is very strong and
>> contains a lot of structure which will make the registration more
>> biased and less stable while it is included.
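>>
>> As a rough sketch only: if you can produce brain masks by some other
>> means (mask_15 and mask_08 below are just placeholders for whatever
>> you manage to create), you could apply them and then register the
>> masked images, e.g.
>>
>>     fslmaths NHBTA002.15.FI -mas mask_15 NHBTA002.15.FI_brain
>>     fslmaths NHBTA002.08.FI -mas mask_08 NHBTA002.08.FI_brain
>>     flirt -in NHBTA002.15.FI_brain -ref NHBTA002.08.FI_brain \
>>           -dof 6 -cost mutualinfo -interp trilinear \
>>           -out reg_15_to_08 -omat reg_15_to_08.mat
>>
>> For standard MR data the masking steps would simply be bet on each
>> image. (In older FSL releases the masking utility is called avwmaths
>> rather than fslmaths.)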
>>
>> Sorry I do not have better news, but it does explain why you are
>> seeing variations in registration between platforms that are much
>> larger than we normally see.
>>
>> Best of luck,
>> Mark
>>
>>
>> On 23 Mar 2006, at 09:44, Arthur Mikhno wrote:
>>
>>> Thanks Mark,
>>>
>>> I have uploaded the files to your server with the number 915331.
>>> I named the zip file that as well. The input image for
>>> coregistration was: NHBTA002.15.FI.hdr and the reference image
>>> was NHBTA002.08.FI.hdr.
>>>
>>> Also, I made a mistake about the variation in degrees; I was
>>> eyeballing it. The actual differences are up to 0.6 degrees, but
>>> they are still clearly visible. This is because 0.6 degrees (about
>>> 0.0105 radians) can translate into a motion of 2+ mm in an image of
>>> 256 one-millimetre voxels: a point near the corner of the volume,
>>> roughly 180 mm from the centre of rotation, moves on the order of
>>> 180 mm x 0.0105 ≈ 2 mm. Translation is also an issue.
>>>
>>> In the zip file there are two folders:
>>> Originals: contains the two images, numbered 15 and 08. In my
>>> tests I coregistered 15 to 08 using the following parameters:
>>> rigid (6 parameters), mutualinfo, trilinear... and the various
>>> search angles.
>>>
>>> Results: contains the resulting images of all the test runs.
>>> Anything with a .1. or a .2. in the file name is the first or
>>> second run, respectively. I have also included the .mat files for
>>> all the runs.
>>>
>>> ---
>>> Here is a summary of the rotation angles and translations that
>>> resulted from the coregistration for all the test runs.
>>> *Note: I used flirt 5.3 for Solaris and Mac and flirt 5.2 for Linux.
>>> *The differences are more pronounced when expressed in degrees:
>>> up to 0.6 degrees in some runs.
>>>
>>>
>>> Rotations (in radians):
>>> Linux (gaba)
>>> -0.0104 -0.0035 0
>>> -0.0067 0.0020 -0.0014
>>> -0.0067 0.0020 -0.0014
>>> -0.0009 0.0007 -0.0016
>>> -0.0009 0.0007 -0.0016
>>>
>>> Unix, Solaris 10 (hal)
>>> -0.0021 -0.0011 0
>>> -0.0122 0.0005 -0.0008
>>> -0.0122 0.0005 -0.0008
>>> -0.0030 -0.0008 -0.0002
>>> -0.0030 -0.0008 -0.0002
>>>
>>> Mac OS 10.5
>>> -0.0138 0.0003 -0.0014
>>> 0.0001 -0.0010 -0.0051
>>> -0.0021 -0.0010 -0.0000
>>> -0.0021 -0.0010 -0.0000
>>> -0.0138 -0.0005 0
>>> -0.0138 -0.0005 0
>>> -0.0139 0.0002 -0.0002
>>>
>>>
>>> Translations (in mm):
>>> Linux (gaba)
>>> -0.1184 1.0070 -0.2453
>>> 0.2208 0.3916 -0.7737
>>> 0.2208 0.3916 -0.7737
>>> 0.1655 0.0346 -0.2972
>>> 0.1655 0.0346 -0.2972
>>>
>>> Unix, Solaris (hal)
>>> -0.0854 0.1305 -0.0691
>>> 0.1571 1.2035 -0.3944
>>> 0.1571 1.2035 -0.3944
>>> 0.0028 0.1201 -0.2275
>>> 0.0028 0.1201 -0.2275
>>>
>>> Mac OS 10.5
>>> 0.2756 1.2033 -0.6820
>>> -0.1728 0.5083 2.1885
>>> -0.0486 0.1598 -0.1579
>>> -0.0486 0.1598 -0.1579
>>> -0.0217 1.3398 -0.6298
>>> -0.0217 1.3398 -0.6298
>>> 0.0470 1.2903 -0.6842
>>>
>>> Thanks again for any help.
>>>
>>> Arthur
>>>
>>>
>>> On Mar 22, 2006, at 11:58 PM, Mark Jenkinson wrote:
>>>
>>>> Dear Arthur,
>>>>
>>>> We normally see some variation across platforms due to the fact
>>>> that the implementation of the underlying maths can vary (e.g.
>>>> floats
>>>> versus doubles for certain library functions that we use) but it is
>>>> very unusual to see such large variations.
>>>>
>>>> For example, registering example_func to structural_brain from
>>>> the FEEDS data set gave me the following on two platforms.
>>>>
>>>> Linux:
>>>> 0.998744 0.007186 -0.049590 2.220527
>>>> -0.005183 0.999170 0.040397 -0.558773
>>>> 0.049839 -0.040089 0.997952 83.827855
>>>> 0.000000 0.000000 0.000000 1.000000
>>>>
>>>> Mac:
>>>> 0.998743 0.007158 -0.049610 2.224911
>>>> -0.005156 0.999171 0.040373 -0.560217
>>>> 0.049858 -0.040066 0.997952 83.824333
>>>> 0.000000 0.000000 0.000000 1.000000
>>>>
>>>> You can see that the differences here are very small - much
>>>> smaller than you seem to be getting. Also, I have never come
>>>> across a variation between runs on the same system! If the
>>>> inputs and options to flirt are the same, the results will be
>>>> the same, as flirt is completely deterministic.
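>>>>
>>>> (A quick way to check this for yourself - just a sketch, with
>>>> placeholder file names - is to run the identical command twice and
>>>> compare the output matrices:
>>>>
>>>>     flirt -in input_img -ref ref_img -omat run1.mat -dof 6 \
>>>>           -cost mutualinfo -interp trilinear
>>>>     flirt -in input_img -ref ref_img -omat run2.mat -dof 6 \
>>>>           -cost mutualinfo -interp trilinear
>>>>     diff run1.mat run2.mat
>>>>
>>>> On a single machine the diff should be empty.)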
>>>>
>>>> Therefore, I think the best way to see what is happening here is if
>>>> we could try this with your data. Could you please upload it to
>>>> us using:
>>>> http://www.fmrib.ox.ac.uk/cgi-bin/upload.cgi
>>>>
>>>> Don't forget to send us the ID number of the upload.
>>>>
>>>> All the best,
>>>> Mark
>>>>
>>>>
>>>>
>>>> On 22 Mar 2006, at 10:00, Arthur Mikhno wrote:
>>>>
>>>>> Hey All,
>>>>>
>>>>> I need HELP reproducing results (coregistrations) on multiple
>>>>> platforms. Why are coregistrations of the same images, using the
>>>>> same parameters, not the same across platforms?
>>>>>
>>>>> I am developing applications using FLIRT for use on multiple
>>>>> platforms (Sun Solaris 10, Red Hat Linux, Mac OS X 10.5).
>>>>> Using the same coregistration options, the results on all systems
>>>>> are different. Below is a summary of the options I used and the
>>>>> results; I basically just varied the search angle in all my tests.
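>>>>>
>>>>> For reference, a command-line equivalent of the settings described
>>>>> below would be roughly as follows (file names are placeholders),
>>>>> with only the -searchrx/-searchry/-searchrz range changing between
>>>>> the three tests:
>>>>>
>>>>>     flirt -in img_15 -ref img_08 -out img_15_to_08 \
>>>>>           -omat img_15_to_08.mat -dof 6 -cost mutualinfo \
>>>>>           -interp trilinear \
>>>>>           -searchrx -180 180 -searchry -180 180 -searchrz -180 180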
>>>>>
>>>>> FLIRT:: Cost: mutualinfo Search: -180 180 Interp: Trilinear
>>>>> Coregistrations within and across all systems varied by a rotation
>>>>> of up to 2 degrees and/or a translation of several voxels.
>>>>> (Min/max values of the images are DIFFERENT... large visible
>>>>> differences.)
>>>>> Results could not be replicated on any system twice.
>>>>> Each time I ran flirt on any given system I would get a
>>>>> different result than the previous time. This occurred even on
>>>>> the same operating system.
>>>>>
>>>>> FLIRT:: Cost: mutualinfo Search: -90 90 Interp: Trilinear
>>>>> Coregistrations can be replicated on any given system. (Min/max
>>>>> values of the images are the same... no visible difference.)
>>>>> Coregistrations within and across all systems varied by a rotation
>>>>> of up to 2 degrees and/or a translation of several voxels.
>>>>>
>>>>> FLIRT:: Cost: mutualinfo Search: -30 30 Interp: Trilinear
>>>>> Coregistrations can be replicated on any given system. (Min/max
>>>>> values of the images are the same... no visible difference.)
>>>>> Coregistrations within and across all systems varied by a rotation
>>>>> of up to 2 degrees and/or a translation of several voxels.
>>>>> Coregistrations on Solaris and Linux were more similar to each
>>>>> other than to the Mac.
>>>>>
>>>>> FEEDS TEST RESULTS:
>>>>> FLIRT:
>>>>> Solaris: 0.0%
>>>>> Linux: 0.3%
>>>>> Mac: 0.33%
>>>>>
>>>>> BET:
>>>>> Solaris: 0.0
>>>>> Linux: 0.0
>>>>> Mac: 0.0
>>>>>
>>>>> Conclusion:
>>>>> For coregistration to be reproducible, a search angle smaller
>>>>> than 90 degrees, and preferably smaller than 45, should be chosen.
>>>>> Results across platforms do not seem to be reproducible.
>>>>>
>>>>> Does anyone know what could be causing this, or how to get around
>>>>> this problem? I need to make sure that what I run on the Mac
>>>>> and on Linux is the same. How can I figure out which
>>>>> coregistration is most accurate?
>>>>>
>>>>> Please let me know if I did not provide enough information.
>>>>>
>>>>> Thanks,
>>>>> Arthur