Can I add my question to this thread? I'm having a similar problem: trying to get FNIRT
to run faster without sacrificing quality.
I'm using FNIRT to register an FSE T2-weighted image to an EPI T2-weighted image. The
EPI image is not diffusion-weighted, so its contrast characteristics are pretty similar to
those of the FSE image. The EPI image is slightly warped by susceptibility artifacts, and
I'm trying to use FNIRT to apply a similar warping to the FSE image.
I got great results using FNIRT's default settings, adding --intmod=global_non_linear. But
execution time was about 30-40 minutes on my machine, and it would really help me if I
could cut that down substantially.
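For concreteness, here is a sketch of the kind of call I mean (file names are placeholders,
and the --aff matrix is assumed to come from a prior FLIRT affine step):

```python
# Sketch of the baseline FNIRT invocation described above.
# All file names below are placeholders, not my actual paths.
fnirt_cmd = [
    "fnirt",
    "--in=fse_t2.nii.gz",          # FSE T2-weighted input image
    "--ref=epi_t2.nii.gz",         # cropped EPI T2-weighted reference
    "--aff=fse_to_epi.mat",        # affine from a prior FLIRT run (assumed)
    "--cout=fse_to_epi_warpcoef",  # warp-coefficient output
    "--iout=fse_in_epi_space",     # resampled output image
    "--intmod=global_non_linear",  # the intensity model that gave good results
]
print(" ".join(fnirt_cmd))
```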
I have cropped down my reference image, and that helped a bit. I tried
--subsamp=8,4,2,2, but that produced this error message when I ran FNIRT:
New Lambda: 240
New FWHM (mm) for --ref: 4
New FWHM (mm) for --in: 6
New Matrix Size: 33 33 3
New Voxel Size: 6.875 6.875 48
Error occurred when preparing to fnirt
Exception thrown with message: St9bad_alloc
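If I do the arithmetic on that first subsampling level myself, the numbers line up with the
log. This is only a sketch: the slice count of my cropped reference is a guess (24 slices),
and FNIRT pads the in-plane matrix slightly (it reports 33 rather than 32):

```python
# Grid geometry at the first --subsamp level (factor 8), from the
# acquisition parameters: 22 cm FOV, 256x256 matrix, 6 mm slices.
fov_mm = 220.0
matrix = 256
slice_mm = 6.0
n_slices = 24            # assumption: approximate slice count after cropping
factor = 8               # first entry of --subsamp=8,4,2,2

in_plane_voxel = fov_mm / matrix       # 0.859375 mm acquired voxel
sub_voxel = in_plane_voxel * factor    # 6.875 mm, matching the log
sub_slice = slice_mm * factor          # 48 mm through-plane, matching the log
sub_matrix = matrix // factor          # 32 (FNIRT reports a padded 33)
sub_slices = n_slices // factor        # only 3 slices left in that direction

print(sub_voxel, sub_slice, sub_matrix, sub_slices)
```

So with 6 mm slices, a factor-8 level leaves almost nothing in the slice direction, which
may be related to why this schedule falls over.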
Using --numprec=float, --splineorder=2, and --intmod=global_linear all at once produced
time savings that were nice, but not really impressive, and the resulting fit wasn't nearly
as good. I'm now restoring the settings to their defaults one at a time, to see which
one degraded my fit. But that will only make execution time longer again.
I'd really like to achieve the 60% time savings that I can supposedly get by fixing that
--subsamp bug. Any suggestions?
For what it's worth, both my input and reference images have a 22 cm FOV, a 256x256
matrix, and 6 mm slice thickness (although I have cropped down the reference image).
Thanks,
Bill