Mark Jenkinson wrote:
> Hi,
>
> Generally FLIRT uses the same output datatype (i.e. bit-depth) as the
> input image.
> You can also force the output datatype using the -datatype option in flirt.
That's odd: here the input image was datatype 2 (uint8), and the output
was datatype 16 (float32). I didn't use the -datatype option, and I can't
see anything in the fslconf script setting this either. Any idea why
FLIRT may have decided on float32 here? (All images are single-file
NIfTI, and so is FSLOUTPUTTYPE.)
> In addition, you can use a range of specific avwmaths calls to change the
> datatype of any image (e.g. avwmaths_32R produces real, 32-bit output;
> avwmaths_16SI produces signed-integer, 16-bit output; etc.)
I've tried
avwmaths_8UI image_as_int16 image_as_uint8
and
avwmaths_8UI image_as_int16 -div 256 image_as_uint8
but neither works: the former gives an image that looks like noise, and
the latter gives an image with a min/max of 0/0. I realise I'm going in a
direction where information will be lost, but it should still be
possible, right?
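(A guess at what's going wrong, not a statement about avwmaths internals: both symptoms are consistent with plain integer arithmetic at the output datatype. A direct cast from int16 to uint8 keeps only the low byte, i.e. wraps modulo 256, which would look like noise; and if the -div is done as integer division, everything below 256 truncates to zero, which would give a 0/0 min/max. A small pure-Python sketch of those two failure modes, using made-up voxel intensities:)

```python
# Illustrative only -- models two ways an int16 -> uint8 conversion
# can fail; this is NOT what avwmaths necessarily does internally.

def cast_uint8(v):
    """Direct cast: keep only the low byte (wraps modulo 256)."""
    return v % 256

def intdiv256_then_cast(v):
    """Integer-divide by 256, then cast: values below 256 become 0."""
    return (v // 256) % 256

voxels = [0, 100, 255, 256, 300, 1000]  # hypothetical int16 intensities

print([cast_uint8(v) for v in voxels])          # [0, 100, 255, 0, 44, 232]
print([intdiv256_then_cast(v) for v in voxels]) # [0, 0, 0, 1, 1, 3]
```

(If this is what's happening, dividing by max/255 rather than by a fixed 256 before converting would at least use the full uint8 range.)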
I'd be very grateful for more specific info on how to convert between
datatypes.
Many thanks,
Ged.
>
> Hope this helps.
> All the best,
> Mark
>
>
>
> Ged Ridgway wrote:
>
>> Hi,
>>
>> Please can someone describe FSL/FLIRT's procedure for determining the
>> bit-depth of output images?
>>
>> I can't see anything in the FAQ about this, nor any options in FLIRT.
>> I have an example where registering a uint8 image to an int16
>> reference has resulted in float32 output -- is this to be expected? Is
>> it perhaps a consequence of my choosing sinc interpolation? (I have
>> other images registered to the same template which have resulted in
>> 16-bit output.)
>>
>> Are there any tools in FSL which can change image bit-depth? It
>> appears avwchfiletype can't, though it seems like a useful option to
>> me, particularly going from e.g. uint8 to int16 or float32 to double,
>> when no information need be lost.
>>
>> Kind regards,
>>
>> Ged.
>