Dear Mark,
thanks so much for your super fast responses! Your help is greatly
appreciated by an FSL/structural data processing newbie like me :)
To make sure I understand: do you mean that I should do the segmentation in
native space, and then use FLIRT with an estimated affine matrix to bring the
tissue images into MNI space, e.g. using the following command?
flirt -in tissue_image -ref refvol -out tissue_image_in_MNI \
      -init invol2refvol.mat -applyxfm
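As a side thought: if the tissue images are hard-labelled segmentations rather
than probability maps, I guess trilinear interpolation would mix the label
values when resampling, so nearest-neighbour interpolation might be safer,
e.g. (same placeholder names as in my command above):

```shell
# Same placeholder names as in the command above; -interp
# nearestneighbour keeps discrete tissue labels intact.
flirt -in tissue_image -ref refvol -out tissue_image_in_MNI \
      -init invol2refvol.mat -applyxfm -interp nearestneighbour
```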
On another note:
I am also interested in using priors with FAST to get a better tissue map
for basal ganglia, using the -a -P options. Would you have any comments on
the use of priors? (I trace structures on images transformed to MNI space,
and I want the segmented tissue images in MNI space so that I can feed them,
along with the traced structures, to a machine learning program to facilitate
automatic recognition.)
Briefly, these are the steps I am thinking of following - please do let me
know if I am doing something stupid or if you have any other comments:
1. Feed axial Analyze T1s to a series of BETs to get the brains (I have found
that a two-stage procedure, with [i] -S -f 0.4 and then [ii] -f 0.4, works
pretty well in most cases at preserving the whole brain while removing
everything else).
2. Use FLIRT (9 dof) to get an affine matrix to MNI space.
3. Run FAST to get a bias-corrected brain image.
4. Use FLIRT to register the bias-corrected brain to MNI space (the idea being
that registration may work better with bias-corrected images) AND to obtain an
affine matrix.
5. Run FAST to get segmented tissue in native space, using priors (-a -P) with
the affine matrix from [4].
6. Use FLIRT with the segmented tissue images from [5], applying the affine
matrix from [4], to get the segmented tissue to MNI space too.
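In shell terms, the steps above might look roughly like this (file names, the
MNI template path, and the FAST output naming are all my guesses, and I am not
sure which direction of transform the -a option expects, so please treat this
as a sketch rather than a working script):

```shell
# Sketch of the proposed pipeline -- names and paths are placeholders.
# Assumes FSL is installed and FSLDIR is set.

T1=subject_T1                                   # axial T1, converted to NIfTI
MNI=$FSLDIR/data/standard/MNI152_T1_2mm_brain   # assumed template path

# 1. Two-stage BET: first pass with -S cleanup, then a plain pass
bet ${T1} ${T1}_tmp -S -f 0.4
bet ${T1}_tmp ${T1}_brain -f 0.4

# 3. FAST for bias correction (-B writes the bias-corrected image)
fast -B -o ${T1}_fast ${T1}_brain

# 2 & 4. FLIRT (9 dof) the bias-corrected brain to MNI space,
# saving the affine matrix
flirt -in ${T1}_fast_restore -ref ${MNI} -dof 9 \
      -out ${T1}_brain_in_MNI -omat native2mni.mat

# 5. FAST with priors in native space; if -a expects the inverse
# direction, convert_xfm -inverse would produce it
fast -g -a native2mni.mat -P -o ${T1}_brain ${T1}_brain

# 6. Apply the same affine to each segmented tissue image
for tissue in ${T1}_brain_seg_0 ${T1}_brain_seg_1 ${T1}_brain_seg_2; do
    flirt -in ${tissue} -ref ${MNI} -init native2mni.mat \
          -applyxfm -out ${tissue}_in_MNI
done
```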
Thanks so much for your help!
Best wishes,
Yannis
On Wed, 12 Aug 2009 19:51:41 +0100, Mark Jenkinson <[log in to unmask]> wrote:
>Hi,
>
>I think you've answered your own question.
>It is step (e) which is the problematic one.
>
>We always recommend doing segmentation in the native image
>space, as transforming to another space involves interpolation
>which blurs the intensities, making the distinctions between
>tissues less clear and the histogram less well defined.
>So just avoid the resampling, do your segmentation in the
>native space and resample your resulting segmented images
>if you want them in a different space.
>
>All the best,
> Mark
>
>
>On 12 Aug 2009, at 19:28, Yannis Paloyelis wrote:
>
>> Dear FSL users,
>>
>> Problems galore!
>>
>> I get FAST segmentation problems. When I specify:
>>
>> [1] fast -g -b -B -o output_image -p input_image (input_image = a
>> brain-only T1.nii.gz, standardised to MNI space), I get the message:
>> Exception: Not enough classes detected to init KMeans.
>>
>> Following from this, when I specify:
>> [P1] fast -g -a (matrix from FLIRT) -b -B -o output_image -v -p
>> input_image
>> (using prior to initialise parameter estimation) OR
>> [P2] fast -g -a (matrix from FLIRT) -b -B -o output_image -P -v -p
>> input_image (using priors throughout)
>>
>> I get the problematic images I have attached. I have checked the input
>> images and they are fine (previous steps: (a)ANALYZE T1(original) ->
>> (b)NIFTI -> (c)fslswapdim -> (d)bet -> (e)FLIRT). I get the same
>> exception
>> message even when, regarding the input image, I omit steps (c), or
>> even
>> steps (b) AND (c).
>>
>> HOWEVER, command [1] works fine (and [P1] has done so previously)
>> when the
>> input image has not been through FLIRT (i.e. step (e) was omitted,
>> and the
>> input image to FAST is not registered to MNI).
>>
>> Any ideas of what I may be doing wrong?
>>
>> Thanks so much for your help!
>>
>> Best,
>> Yannis
>> <X_gm_stdspace.jpg.zip><prob_1.jpg.zip><P2.jpg.zip>