Hi Mark,

Thanks for all your help. I think that actually all makes sense. Have a great day!

Best,
Jon

On Apr 14, 2014, at 3:00 AM, Mark Jenkinson <[log in to unmask]> wrote:

> Dear Jon,
> 
> To start with, you won't be able to use .mat files in a call to fslmaths, as fslmaths only deals with images and is not the tool that we use for transforming images between spaces.  Whether it is necessary to transform things depends on whether fsl_anat cropped your images or not.  If it did not, then you do not need to transform if you've used the same input for fsl_anat and vbm.  However, if fsl_anat did crop your images (and changed the FOV) then it is necessary to transform the relevant mask from the original space (uncropped) into the space of your bias corrected image (cropped).  So whether you can skip these steps or not really depends on whether this cropping has occurred.  You can tell this by either using fslsize or just trying to load the original vbm brain mask and the fsl_anat bias corrected image into fslview together (as fslview will only allow this if they have the same FOV, and if that is the case then there was no cropping).
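The FOV check described above can also be scripted with fslval (an FSL utility that prints a single header field). This is only a sketch with illustrative filenames, assuming FSL is on the PATH:

```shell
# Sketch: decide whether fsl_anat cropped the image by comparing the
# voxel dimensions (dim1-dim3) of two images with FSL's fslval.
# Filenames below are illustrative; fslval must be on the PATH.
same_fov () {
  a="$1"; b="$2"
  for d in dim1 dim2 dim3; do
    if [ "$(fslval "$a" "$d")" != "$(fslval "$b" "$d")" ]; then
      echo "different"   # FOVs differ => cropping occurred
      return 0
    fi
  done
  echo "same"            # same FOV => no cropping, no transform needed
}
# Example (illustrative filenames):
# same_fov struc_brain_mask T1.anat/T1_biascorr
```

If it prints "same", the masks can be applied directly; "different" means the transform step below is needed.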
> 
> If there was no cropping then you just need to apply the masks to the bias corrected images using fslmaths (with the -mas or -mul options, as both will do the same thing if given a binary mask).
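In the no-cropping case, the masking is a single call. A sketch with illustrative filenames:

```shell
# Sketch: no cropping, so just apply the existing brain mask to the
# bias-corrected image.  Filenames are illustrative; needs FSL's fslmaths.
mask_biascorr () {
  # -mas zeroes every voxel outside the mask; with a binary mask,
  # -mul would produce an identical result.
  fslmaths T1_biascorr -mas struc_brain_mask T1_biascorr_brain
}
# mask_biascorr
```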
> 
> If there was cropping, but you've used exactly the same input to fsl_anat as you did in your VBM, then you can avoid doing the registration and use the pre-stored matrix instead.  I gave the registration prescription before as it would have worked in either case (cropping or not).  The pre-stored matrix will be T1_orig2roi.mat, and you can apply such a transform using either flirt, the ApplyXFM gui, or applywarp.  After this you need to apply the threshold and binarisation in fslmaths, then apply this to the bias corrected image (using -mas or -mul, as above).
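The pre-stored-matrix route could look something like this sketch (filenames are illustrative; flirt and fslmaths are the FSL tools named above):

```shell
# Sketch: cropping occurred but fsl_anat's stored transform can be
# reused, so no new registration is needed.  Filenames are illustrative.
apply_prestored () {
  # 1. apply the stored orig->roi matrix to the original brain mask
  flirt -in struc_brain_mask -ref T1_biascorr \
        -applyxfm -init T1_orig2roi.mat -out mask_in_roi
  # 2. interpolation blurs the mask edges, so threshold and binarise
  fslmaths mask_in_roi -thr 0.5 -bin mask_in_roi_bin
  # 3. apply the binary mask to the bias-corrected image
  fslmaths T1_biascorr -mas mask_in_roi_bin T1_biascorr_brain
}
```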
> 
> If you are not confident with any of the steps above then I recommend that you look at the registration lecture and practical in the FSL Course material.
> 
> All the best,
> Mark
> 
> 
> 
> 
> On 11 Apr 2014, at 23:16, Jonathan Siegel <[log in to unmask]> wrote:
> 
>> Hi Mark,
>> 
>> Thanks for the response, but unfortunately I’m still majorly confused about how to do this since it’s my first project using FSL and I’ve been trying to teach myself how to do everything. To clarify, I’ve done the analyses using the standard VBM processing pipeline and I was just curious how it would look if I used the fsl_anat processing pipeline on the same images. The struc.nii.gz images from the fslvbm processing and the fsl_anat processing are the same exact files (since they’re just the original anatomical scans). I thought that the fsl_anat script outputs a bunch of .mat files in order to transform the images back and forth between spaces. Can I not just take the biascorr_brain.nii.gz files and use fslmaths -mul T1_std2orig.mat and then take that output image and use the fslmaths -mul T1_roi2orig.mat in order to get the biascorr_brain.nii.gz images into the format that’s necessary to feed into the fslvbm_2_template script?
>> 
>> Instead, if I’m trying to follow your directions then I’d register the structural images together (which are the same structural images - so I’m guessing I can skip this step?), but I’m not sure exactly how to do this. Then you said to transform the brain_mask.nii.gz from the original image space to the space of the biascorr_brain? Again, I have no idea how to do this. Assuming I can get to this point, then I threshold and binarise the results using the 'fslmaths -thr .5 -bin’ command, which will then give everything above .5 a value of 1 and everything under .5 a value of 0? After this step, I should then multiply the brain_mask files, which are thresholded and binarised, with the biascorr_brain files to get the new brain extracted images (i.e. ‘brain_mask.nii.gz fslmaths -mul biascorr_brain.nii.gz’)? I’m sorry that I’m so lost on this process, but any help you can give me would be greatly appreciated.
>> 
>> Thanks,
>> Jon
>> 
>> 
>> 
>> On Apr 10, 2014, at 1:36 PM, Mark Jenkinson <[log in to unmask]> wrote:
>> 
>>> Hi,
>>> 
>>> If you want to use the same brain masks that you've already got, then just register the structural images together (the ones you used before and the new biascorr ones) then use that registration to transform the brain *mask* from the original image space to the space of the biascorr images (assuming it is different, otherwise you can skip this step).  You will need to threshold and binarise the results, but a threshold of 0.5 and binarisation (fslmaths with -thr 0.5 -bin) should do the trick.  You can then multiply the mask with the biascorr image to get your new brain extracted image, and then use these in vbm (replacing all the required brain extracted and brain mask images in the output directory of stage 1).
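As a concrete sketch of that recipe (register, transform the mask, threshold and binarise, multiply), with illustrative filenames and assuming FSL's flirt and fslmaths are available:

```shell
# Sketch of the four steps above.  Filenames are illustrative.
transfer_mask () {
  # 1. register the original structural to the biascorr image,
  #    saving the transform as orig2bc.mat
  flirt -in struc -ref T1_biascorr -omat orig2bc.mat -dof 6
  # 2. apply that transform to the brain mask
  flirt -in struc_brain_mask -ref T1_biascorr \
        -applyxfm -init orig2bc.mat -out mask_in_bc
  # 3. threshold at 0.5 and binarise (interpolation makes edges non-binary)
  fslmaths mask_in_bc -thr 0.5 -bin mask_in_bc_bin
  # 4. multiply the binary mask with the biascorr image
  fslmaths T1_biascorr -mul mask_in_bc_bin T1_biascorr_brain
}
```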
>>> 
>>> All the best,
>>> Mark
>>> 
>>> 
>>> On 9 Apr 2014, at 01:00, Jonathan Siegel <[log in to unmask]> wrote:
>>> 
>>>> Hey Mark,
>>>> 
>>>> Sorry to bother you again, but I tried feeding the biascorr_brain images to the fslvbm_1_bet command and unfortunately it does an extremely poor job of the brain extraction (regardless of the parameters/flags that I use). Is there any way to use any of the .mat files in order to transform the biascorr_brain images into a format that fslvbm_2_template will take? Otherwise I’d have to manually clean up the brain extracted biascorr_brain images, which would likely take just as long as the manual clean-up of the brain extracted original anatomical scans. Thanks again for your help.
>>>> 
>>>> Best,
>>>> Jon
>>>>  
>>>> On Apr 8, 2014, at 2:35 AM, Mark Jenkinson <[log in to unmask]> wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> I meant to feed them in at the very start - as inputs to stage 1.
>>>>> 
>>>>> All the best,
>>>>> Mark
>>>>> 
>>>>> 
>>>>> On 8 Apr 2014, at 07:32, Jts67 <[log in to unmask]> wrote:
>>>>> 
>>>>>> Hi Mark,
>>>>>> Thanks for the response. I do indeed have 1's in the second column and perhaps I should've just copied part of my design matrix instead. 
>>>>>> As for using the fsl_anat biascorr_brain images as the initial inputs for VBM, it spits out an error when I try to do so. I assumed I need to use one of those .mat files to get those images back into a space/format that fslvbm_2_template likes, or are you saying I just feed them into the fslvbm_1_bet stage and then proceed with the rest of the vbm processing pipeline?
>>>>>> Best,
>>>>>> Jon
>>>>>> 
>>>>>> On April 8, 2014 2:15:22 AM EDT, Mark Jenkinson <[log in to unmask]> wrote:
>>>>>> Hi Jon,
>>>>>> 
>>>>>> I was assuming that the copied lines were only from the beginning of the design matrix and that there would be a set of 1's later on in the second column.  If not, then Mike makes an important point here.
>>>>>> 
>>>>>> As for your other questions:
>>>>>> 
>>>>>>  - You can either do two analyses, or put it all together into a single one.  There can be some advantages with DOF by keeping everything together in one analysis.
>>>>>> 
>>>>>>  - If you want to use the fsl_anat bias corrected results, then these should be included as the initial inputs into VBM.
>>>>>> 
>>>>>> All the best,
>>>>>> Mark
>>>>>> 
>>>>>> 
>>>>>> On 5 Apr 2014, at 13:28, Michael Dwyer <[log in to unmask]> wrote:
>>>>>> 
>>>>>>> Hi Jon,
>>>>>>> 
>>>>>>> Maybe it's just a copying issue, or I'm misunderstanding something, but if your design matrix is only two groups, with EV1 as Old and EV2 as Young, then where one EV is 0, the other should be 1. It looks like your EV2 is 0 for all cases, half of which are also 0 in EV1.
>>>>>>> 
>>>>>>> Best,
>>>>>>> Mike
>>>>>>> 
>>>>>>> 
>>>>>>> On Sat, Apr 5, 2014 at 7:31 AM, Jon Siegel <[log in to unmask]> wrote:
>>>>>>> Hi Mark,
>>>>>>> Thank you so much for your response. I just pulled some arbitrary numbers, but you’re right that the ones I posted do seem like they’re centered around 1 (this isn’t the case though). It looks like I need to run two separate randomise commands, the first initially looking to see if local GM volume has a linear relationship with both groups and the second looking to see within each group (since it’s possible that GM volume predicts memory performance differently for young and older adults). I appreciate your help tremendously.
>>>>>>> On a side note, I also used the fsl_anat script to just test it out and it spits out some biascorr_brain.nii.gz files, but is there a way to transform them back so that I could then run them through the traditional VBM pipeline (creating my study-specific template) and then into the fslvbm_3_proc stage? It seems like fsl_anat has done an excellent job cleaning up the brains and I’m curious to see if the results are similar using that pipeline instead. Thanks again for the great advice.
>>>>>>> Best,
>>>>>>> Jon
>>>>>>> 
>>>>>>> On Apr 5, 2014, at 3:02 AM, Mark Jenkinson <[log in to unmask]> wrote:
>>>>>>> 
>>>>>>> > Dear Jonathan,
>>>>>>> >
>>>>>>> > Firstly, although you've "centered" your ICV values, they look like they are centered about 1.0, and not about 0.  For the GLM, it assumes a linear relationship (not a multiplicative one) and so the ICV values need to be centred around 0.  This is also true for all other covariates that you want to add to the model.  With your current model there is quite a lot of shared signal between the mean of both groups and the ICV, making the results difficult to interpret cleanly.
>>>>>>> >
>>>>>>> > To add a new covariate to the model then you can put it as any EV (the order does not matter as they are all fitted together, not one at a time).  The key is to associate the non-zero entries in the contrast with the appropriate EV.  So if you keep EV3 as the ICV (but zero centred) and add your extra covariate (also zero centred) as EV4, then you can have contrasts such as [0 0 0 1] and [0 0 0 -1] that test for linear relationships (of appropriate signs) between your new covariate and the local GM volume.  This test would be across both groups if you did it like this.
>>>>>>> >
>>>>>>> > If you wanted to see if there was a separate relationship in each group then you need to split this new covariate across two EVs - that is, EV4 and EV5 - where EV4 would have the demeaned (i.e. centred) covariate values for one group and zeros for the other group, and EV5 would be the other way around (zeros for the opposite group ...).  The normal recommendation for demeaning (centering) in this situation would be to take all the covariate values for this quantity and demean prior to splitting them into the two subgroups.  This would then do the conservative test and account for any potential differences in the mean values of the covariate between groups, splitting any group difference in local GM volume between the group EVs (EV1 and EV2) and the new EVs.  However, if you have a strong prior argument that any difference in local GM volume could not be attributed to group differences in the mean of your covariate value, then you can demean within each group separately, but be warned that this is a very strong assumption and you would need to have very good reasons and evidence to choose to do the analysis this way.  The safer, and generally recommended, alternative is to demean across both groups together (as a single set of values) and then split the demeaned values into the separate EVs.
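A small sketch of the recommended demean-then-split approach, with made-up covariate values (pure shell/awk, no FSL needed):

```shell
# Sketch: demean a covariate across BOTH groups, then split the demeaned
# values into two EVs (zeros for the other group).  Values are made up;
# the first three subjects are group A, the last three group B.
cov="10 12 14 20 22 24"
mean=$(echo "$cov" | awk '{s=0; for(i=1;i<=NF;i++) s+=$i; print s/NF}')
demeaned=$(echo "$cov" | awk -v m="$mean" \
  '{for(i=1;i<=NF;i++) printf "%g ", $i-m}')
# EV4: demeaned values for group A, zeros for group B; EV5 the reverse
ev4=$(echo "$demeaned" | awk '{print $1, $2, $3, 0, 0, 0}')
ev5=$(echo "$demeaned" | awk '{print 0, 0, 0, $4, $5, $6}')
echo "EV4: $ev4"   # EV4: -7 -5 -3 0 0 0
echo "EV5: $ev5"   # EV5: 0 0 0 3 5 7
```

Contrasts such as [0 0 0 1 0] and [0 0 0 0 1] would then test the covariate relationship within each group separately.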
>>>>>>> >
>>>>>>> > For more details on demeaning and centering, see the FSL wiki page on GLM, as well as Jeanette Mumford's pages (there is a link from the FSL wiki) and also the FSL Course slides (the third fMRI lecture - Advanced Topics - is where this material is).
>>>>>>> >
>>>>>>> > All the best,
>>>>>>> > Mark
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> > On 2 Apr 2014, at 20:15, Jon Siegel <[log in to unmask]> wrote:
>>>>>>> >
>>>>>>> >> Hi FSL Experts,
>>>>>>> >> I've searched through the message board and the archives and it seems like there are quite a lot of questions about the design matrix and contrast files. I just wanted to make sure that I'm setting things up correctly for my experiment.
>>>>>>> >> I have 2 separate groups (Older adults and Young adults; n=18 and n=21 respectively) that I'm attempting to perform VBM on, but beyond the significant volumetric differences between the two groups, I'd like to know how to create the contrasts for testing the correlation/association of these specific brain regions with memory performance measure(s). I've been able to run the whole-brain analyses while controlling for ICV by creating the following matrix:
>>>>>>> >>
>>>>>>> >> EV1= Old (OA), EV2= Young (YA), EV3= ICV (scaling factor centered)
>>>>>>> >> 1       0     1.2345
>>>>>>> >> 1       0       .8652
>>>>>>> >> 1       0       .6587
>>>>>>> >> 0       0      1.1365
>>>>>>> >> 0       0       .9874
>>>>>>> >> 0       0       1.3245
>>>>>>> >>
>>>>>>> >> I then created the following contrasts based off the two group difference adjusted for covariate:
>>>>>>> >> c1 = 1   -1   0  --> OA > YA
>>>>>>> >> c2=  -1   1   0  --> YA > OA
>>>>>>> >> c3=  0    0   1  --> Pos effect of ICV/scaling factor
>>>>>>> >> c4=  0    0   -1 --> Neg effect of ICV/scaling factor
>>>>>>> >> Are these set up correctly?
>>>>>>> >>
>>>>>>> >> If so, I'd now like to modify the analyses and investigate whether these significant differences in brain volume between OA and YA are also associated with one or more of our behavioral performance measures (i.e., Memory interference). From what I've read, I can just add another EV (Proactive Interference score centered) of interest into the GLM, but how exactly do I set up the contrasts? I assume that I'd want to put the behavioral variable of interest into the GLM before controlling for any nuisance variables, which would then make EV3= memory score centered and EV4= ICV. But again, I'm confused about how to set up the contrasts.
>>>>>>> >>
>>>>>>> >> For your reference, I'd like to look and see whether there is a correlation/association between GM volume and memory performance overall, as well as for each group separately. Any help you can give me would be greatly appreciated. Thank you so much for your time and please let me know if you need any additional information.
>>>>>>> >>
>>>>>>> >> Best Regards,
>>>>>>> >> Jonathan Siegel
>>>>>>> >>
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> -- 
>>>>>>> Michael G. Dwyer, Ph.D.
>>>>>>> Assistant Professor of Neurology
>>>>>>> Director of Technical Imaging Development
>>>>>>> Buffalo Neuroimaging Analysis Center
>>>>>>> University at Buffalo
>>>>>>> 100 High St. Buffalo NY 14203
>>>>>>> [log in to unmask]
>>>>>>> (716) 859-7065
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>