Mark,

Thank you, as always! Great ideas.

My goal: I have run *run_first* to extract the hippocampus from a
non-betted MPRAGE, and I want to extract percent signal change from the
segmented hippocampus. Featquery requires the mask to be in
standard-space, highres-space or lowres-space. (Note that the segmented
hippocampus is in the following space: 160x192x144 voxels with
1.3x1.3x1.3mm resolution.) According to the Featquery website:
"Featquery will automatically detect which space this mask is in
(standard-space, highres-space or lowres-space) and will transform it
into the native lowres space of example_func; of course this can only
work if FEAT registration was setup and carried out."

I understand that only the non-betted whole-brain MPRAGE can be
registered to standard space -- before the hippocampus is extracted.
Unfortunately, I do not see the saved registration file (i.e., the
*to_std* file) output by the run_first command. In addition, I would
prefer not to use the *run_first_all* command, given a few registration
issues when I tried it before.

Should I run *run_first* from scratch for the hippocampus after I
register each subject's non-betted MPRAGE to the MNI152_T1_2mm_brain?
What is the most efficient way to achieve my goal? I would really
appreciate your suggestions about the best way to proceed.

Thank you very much - I apologize for the delayed follow-up.

On Mon, Nov 30, 2009 at 2:24 AM, Mark Jenkinson <[log in to unmask]> wrote:

> Hello,
>
>> 1. To follow up on "spaces," how does one know the appropriate
>> display range in FSLView for the different images?
>
> Look at the intensities that are contained in the image. Either click
> around different voxels in FSLView and note the values in the
> "intensity" box (near the coordinate displays) or use a tool like
> "fslstats" with the -r (or -R) option. When you have a feeling for
> the range of intensity values, just set the display range (min and
> max) in FSLView to cover this range (and not much more).
>
>> 2. To follow up on the registration --
>>
>> a. Is it possible to register a FIRST hippocampus (already
>> segmented & boundary corrected) to standard space? e.g.,
>>
>> flirt -in [FIRST hippocampus; segmented & boundary corrected] -ref
>> [MNI152_T1_2mm] -out [name] ?
>
> You should not do this. You should only register images that look
> like each other (e.g. the image of a whole brain and another image
> of a whole brain, not of one isolated structure). However, if you
> have run FIRST then the registration has already been done and saved
> in a file called something like *_to_std_sub.mat
> You can *apply* this transformation (the result of a registration) to
> any image in the original space using the -applyxfm flag in FLIRT.
>
>> What is the appropriate display range for this image?
>
> For a labeled image it will depend on the structure with the maximum
> label number. Typically if you set the range to be 0 to 40 then you
> should see things fine. But use the strategy I describe above to know
> for sure.
>
>> b. Or must one perform the flirt registration to standard space
>> from the raw structural image with no brain extraction? And then
>> use FIRST to segment the desired hippocampus?
>
> If you use run_first_all then this is all done for you. And yes, it
> is recommended to use non-brain-extracted images with this.
>
> All the best,
> Mark
>
>> Thank you. Very much.
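
A minimal command sketch of the mask-transformation step Mark describes
above, assuming FSL's standard install layout. The names "subj",
"subj_L_Hipp_corr" and "subj_to_std_sub.mat" are placeholders; check
them against the actual file names that FIRST wrote out:

  # Apply the saved FIRST registration to the boundary-corrected
  # hippocampus label; nearest-neighbour interpolation keeps the
  # label values intact.
  flirt -in subj_L_Hipp_corr -ref ${FSLDIR}/data/standard/MNI152_T1_2mm \
        -applyxfm -init subj_to_std_sub.mat \
        -interp nearestneighbour -out subj_L_Hipp_std

  # Binarise into a standard-space mask that Featquery can detect.
  fslmaths subj_L_Hipp_std -bin subj_hippo_mask_std

  # Then query percent signal change from a first-level FEAT directory
  # (-p converts PEs/COPEs to percent); the argument order here is from
  # memory of the usage message, so confirm it by running featquery
  # with no arguments.
  featquery 1 subj.feat 1 stats/pe1 hippoPSC -p subj_hippo_mask_std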
>>
>> On Sat, Nov 28, 2009 at 3:35 PM, Mark Jenkinson <[log in to unmask]>
>> wrote:
>>
>> Hello,
>>
>> There is no real reference about "spaces" besides the documentation.
>> I'll try to answer your questions:
>>
>> 1 - Outputs are in different spaces as the originally acquired images
>> are in different spaces. That is, the Field Of View (FOV) and
>> resolution are different for the functional, structural and
>> standard/template images. We try to keep things in the most
>> appropriate space, partly because moving between spaces
>> involves registration and resampling steps which can be inaccurate
>> and the interpolation (in the resampling) will degrade the image
>> quality by some amount.
>>
>> 2 - As examples:
>> Functional images might be 64x64x40 voxels with 3x3x4mm resolution
>> Structural images might be 256x256x200 voxels with 1x1x1mm resolution
>> Our standard space template images are 91x109x91 voxels with 2x2x2mm
>> resolution
>>
>> 3 - The spaces correspond to your different acquisition images - so
>> just check by comparison with your original images.
>>
>> For your registration I'm not sure what your [FIRST INPUT] image is.
>> Is it a structural image with no brain extraction? If so, then you
>> should not register to the avg152T1_brain but use the non-brain
>> extracted version. Also, we recommend sticking with the MNI152
>> naming, so MNI152_T1_2mm would be the appropriate image in this case.
>> I am not sure what you mean by "appropriate brain region". Also,
>> check that the display range is set appropriately in FSLView, as
>> having this range set badly (which sometimes happens with the default
>> settings) could make the whole image look white.
>>
>> All the best,
>> Mark
>>
>>
>> On 28 Nov 2009, at 17:10, ACE . wrote:
>>
>> Hello.
>>
>> Could you please point me to a resource about different "brain space"
>> options? (e.g., high resolution space; standard space; native space?)
>>
>> I have read the registration information on the FSL website, but I
>> have additional questions such as the following:
>>
>> 1. Why are different FSL outputs in different spaces?
>>
>> 2. Could you give me an example of an FSL image in each "space"?
>>
>> 3. What is the best way to determine an image's "space"? Is there a
>> list of which voxel dimensions signify which "space"?
>>
>> To register a FIRST brain area (with boundary correction) to standard
>> space -- I tried the following command:
>>
>> flirt -in [FIRST INPUT] -ref avg152T1_brain -out [name]
>>
>> But I do not see the appropriate brain region in FSLView when I open
>> the output. I see "white" throughout the entire brain. Any thoughts?
>>
>> THANK YOU.
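
For reference, minimal sketches of the two commands discussed in this
exchange, with "struct" as a placeholder for a non-brain-extracted
structural image:

  # Inspect the intensity range before setting the FSLView display
  # range.
  fslstats struct -r    # robust min/max
  fslstats struct -R    # absolute min/max

  # Register the whole structural (not an isolated structure) to the
  # matching non-brain-extracted MNI152 template, saving the transform
  # so it can be re-used later with -applyxfm.
  flirt -in struct -ref ${FSLDIR}/data/standard/MNI152_T1_2mm \
        -omat struct_to_std.mat -out struct_to_std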