Hi,
If you want an output image then you need to supply an output name to the fslreorient2std tool.
Similarly, you need to save the cropped output from the robustfov tool (using the -r option), giving it as input the image that fslreorient2std produced.
Then use that cropped image as the input to BET as usual.
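As a sketch, using your filename, the full sequence would look something like this (the intermediate output names here are just examples; choose whatever you like):

```shell
# Reorient to match the standard template orientation,
# writing the result to a new image
fslreorient2std D001_3D_A D001_3D_A_reorient

# Crop the field of view; -r saves the reduced-FOV image
robustfov -i D001_3D_A_reorient -r D001_3D_A_crop

# Run BET on the cropped image as usual
bet D001_3D_A_crop D001_3D_A_brain
```

Without the second argument, fslreorient2std only prints the reorientation matrix (which is what you saw), and without -r, robustfov only reports the FOV limits rather than writing out a cropped image.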
All the best,
Mark
On 23 Jan 2014, at 10:00, Elijah Mak <[log in to unmask]> wrote:
> Hi Mark,
>
> Thank you for the suggestion. I am wondering if I am running it correctly though.
>
> I ran fslreorient2std and robustfov on the T1 image with the following commands and output:
>
> fslreorient2std D001_3D_A
>
> 1 0 0 0
> 0 1 0 0
> 0 0 1 0
> 0 0 0 1
>
> robustfov -i D001_3D_A
> Final FOV is:
> 0.000000 180.000000 0.000000 240.000000 69.000000 170.000000
>
>
> Both processes took only a few seconds to complete. Is this normal?
>
> Also, the BET is still showing a lot of neck. I have uploaded the screenshot on Dropbox at https://www.dropbox.com/s/2ud4rup1tiyjiu0/Screenshot%202014-01-23%2009.55.20.png
>
> Having looked at all of the data, this error seems rather consistent. Inclusion of brain looks generally okay, but most of the errors concern the inclusion of the neck.
>
> I was reading up about the fsl_anat script which includes bias correction amongst other processing techniques. Should I run my images through this pipeline before going to SIENA?
>
> Many thanks for your help!
> Best Wishes,
> Elijah
>