Hi,

I recently attended the FSL course, and went through the lectures and tutorials for FEAT lower-level processing and group-level analyses with my lab. We came up with several follow-up questions. Any information, or references to where we can find more information, would be greatly appreciated!


  1.  What MNI resolution is recommended for functional registration? Does it matter what the resolution of your functional data is, or is it always 2mm (as used in the tutorial)?

We often use 2mm, as the resolution of typical fMRI data means there is little benefit in going to a higher resolution, e.g., 1mm.  If you have high-resolution fMRI data, or want to get the most accurate registrations, then we actually recommend a surface-based approach, as followed in the HCP project (see the minimal preprocessing pipelines paper, Glasser et al., for more info).


  2.  In the case that a usable high-res scan is not acquired, is it still possible to perform registration of the functional scan directly to the template? If so, how would one do this in FEAT?

This is possible, but in our experience the results are poor.  You would need to supply a template image that matches the contrast of the example_func, and replace the MNI152_T1 template with it in the registration part of the FEAT GUI.  However, trying to cope, in a single registration step, with differences in both anatomy and signal loss, usually at relatively low resolution, makes it extremely difficult to get results as good as those you can achieve by using a structural scan.


  3.  In the tutorial, we discussed low-pass filtering for event-related designs. Is low-pass filtering also recommended for block designs?

We don't in general recommend low-pass filtering for either type of design.  We do recommend high-pass filtering, and we recommend that for all designs.
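For intuition, here is a minimal numpy sketch of the idea behind high-pass filtering: subtract a heavily smoothed (low-pass) copy of the timeseries, which removes slow scanner drift while leaving the faster task signal intact. This is a simplification of FSL's actual Gaussian-weighted running-line filter, and the drift/block signal below is invented purely for illustration.

```python
import numpy as np

def highpass(ts, cutoff_s, tr):
    """Remove slow drift by subtracting a Gaussian-smoothed (low-pass)
    copy of the timeseries -- a simplified stand-in for FSL's
    Gaussian-weighted running-line high-pass filter."""
    sigma = cutoff_s / (2.0 * tr)          # smoothing width in volumes
    n = len(ts)
    idx = np.arange(n)
    smoothed = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((idx - i) / sigma) ** 2)
        smoothed[i] = np.sum(w * ts) / np.sum(w)
    return ts - smoothed + ts.mean()       # keep the original mean level

# demo: slow linear drift plus a 30 s on / 30 s off block signal, TR = 2 s
tr = 2.0
t = np.arange(200) * tr                    # time in seconds
drift = 0.01 * t                           # slow scanner drift (invented)
blocks = np.where((t // 30) % 2 == 0, 1.0, 0.0)
filtered = highpass(drift + blocks, cutoff_s=100.0, tr=tr)
```

After filtering, the drift is largely gone while the block structure survives, which is exactly why high-pass (not low-pass) filtering is the default recommendation.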


  4.  We often collect our functional images at an "oblique" angle to optimize signal in areas of drop-out (subcortical). Is there any extra step needed to de-oblique the functional data prior to registration, or does registration automatically take care of this?

No extra steps needed - registration should take care of this.  Just make sure you check that the results look good.


  5.  We came across the terms global intensity normalization and grand mean scaling in the lectures/tutorial. Are these the same thing? Are they related to percent-signal-change calculations, or are they something else?

Yes, these are the same thing. It ensures a consistent overall scaling between subjects, but more normalisation is needed to get to percent signal change (e.g., that is voxelwise, and also takes into account the EVs and contrasts in the model).
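To make the distinction concrete, here is a small numpy sketch: grand mean scaling multiplies the whole dataset by a single number (FEAT scales to a grand mean of 10000), whereas percent signal change is a voxelwise transformation. The array shapes and the gain factor are invented for illustration, and the full FEAT percent-signal-change calculation additionally involves the scaling of the EVs and contrasts, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# two "subjects" acquired with different, arbitrary scanner gains
sub_a = rng.normal(1000.0, 10.0, size=(50, 3))   # volumes x voxels
sub_b = sub_a * 3.7                              # same data, different gain

def grand_mean_scale(data, target=10000.0):
    """Scale the whole dataset by ONE number so its overall mean
    becomes `target` (FEAT uses 10000) -- grand mean scaling."""
    return data * (target / data.mean())

def percent_signal_change(data):
    """VOXELWISE conversion: deviation from each voxel's own mean,
    as a percentage of that mean (EV/contrast scaling omitted)."""
    mean = data.mean(axis=0)
    return 100.0 * (data - mean) / mean

a = grand_mean_scale(sub_a)
b = grand_mean_scale(sub_b)
```

After grand mean scaling the two "subjects" become numerically identical, and percent signal change is unaffected by the scanner gain at all, which is why both steps make units comparable across subjects.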


  6.  For group-level analyses, what is the "negative variance problem", as it relates to fMRI data? (mentioned in the lecture)

The variance at the group level is equal to the first-level variance (from the noise in the timeseries) plus the second-level variance (the between-subject biological variation). When a single estimate of this combined variance is made at the group level (as OLS does), it is possible for the _estimate_ to end up less than the first-level variance.  That would imply that the second-level variance is negative (which is impossible), and this is the "negative variance problem" - that is, estimates of the combined variance can end up lower than the individual variances that make it up, simply because they are noisy estimates.  In FLAME (the higher-level stats used in FEAT) we explicitly set up a model whereby the combined variance _cannot_ be less than the first-level variance, thus avoiding any negative variance issues.
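A small simulation illustrates the problem (all the numbers below are invented): with few subjects, a single OLS-style estimate of the combined variance frequently falls below the known first-level variance, implying a negative between-subject variance. The final line shows the simplest possible fix, clamping the estimate; FLAME achieves the same constraint in a far more principled way.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sub = 8                  # small group, typical of fMRI studies
first_level_var = 4.0      # within-subject (timeseries noise) variance
between_var = 0.5          # true between-subject variance

neg = 0
for _ in range(2000):
    # each subject's effect estimate carries BOTH sources of variance
    cope = rng.normal(0.0, np.sqrt(between_var + first_level_var), n_sub)
    combined_hat = cope.var(ddof=1)          # OLS-style single estimate
    # implied between-subject variance = combined - first-level
    if combined_hat - first_level_var < 0:
        neg += 1

print(f"negative between-subject variance in {neg / 2000:.0%} of runs")

# crude FLAME-like constraint: combined variance >= first-level variance
clamped = max(combined_hat, first_level_var)
```

With only 8 subjects, the implied between-subject variance goes negative in a substantial fraction of simulated experiments, even though the true value is positive.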


  7.  For group-level analyses, is there a way to test for homogeneous variances (e.g., Levene's test) before performing an independent-samples t-test? Or is it best to assume that you always have different variances when dealing with fMRI data?

We do not use such a test, but it would be possible to implement something using the data from the first-level analysis.  It is certainly not always better to assume different variances.  It is only better if (a) there is a reason for the variances to differ, and (b) you have sufficient data to make the separate estimates good enough.  The latter is particularly limiting in a lot of fMRI experiments, as using a small number of subjects to estimate a variance can produce badly conditioned estimates that end up making the whole analysis less powerful than if a single, pooled variance had been used instead.  It isn't easy to know exactly where this tradeoff lies, but in general variance estimates need around 20 samples before they become relatively well conditioned, so our rule of thumb is to use separate variances only when you have 20 or more subjects in each group and there is a reason to expect different variances (e.g., patients vs controls).
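To see what the pooled-vs-separate choice looks like in practice, here is a minimal numpy sketch of both versions of the two-sample t statistic (these are the textbook pooled and Welch formulas, not FEAT's implementation; the group sizes and effect are invented):

```python
import numpy as np

def two_sample_t(x, y, equal_var=True):
    """Two-sample t statistic: pooled variance vs Welch (separate
    variances), returning (t, degrees of freedom)."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    diff = x.mean() - y.mean()
    if equal_var:
        # pooled variance: one shared estimate, more dof
        sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
        se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        dof = nx + ny - 2
    else:
        # separate variances, Welch-Satterthwaite (fewer effective dof)
        se = np.sqrt(vx / nx + vy / ny)
        dof = (vx / nx + vy / ny) ** 2 / (
            (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return diff / se, dof

rng = np.random.default_rng(2)
patients = rng.normal(1.0, 2.0, 25)   # genuinely more variable group
controls = rng.normal(0.0, 1.0, 25)
t_pooled, dof_pooled = two_sample_t(patients, controls, equal_var=True)
t_welch, dof_welch = two_sample_t(patients, controls, equal_var=False)
```

The separate-variance (Welch) version always has fewer effective degrees of freedom than the pooled version, which is exactly the power cost the answer above describes when groups are small.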


  8.  When looking at the design efficiency immediately following the model set-up (level 1), we get reasonable % ranges (e.g., 0-3%). However, when we click "estimate from data" on the Misc tab, and then re-run the design efficiency on the model, we get values that are extremely high (e.g., 20,000,000%). Does this mean there is way too much error in our data, or are we using this tool incorrectly? When FSL estimates the design efficiency without clicking "estimate from data", what default values is it drawing from?

There was a bug in this calculation in one of the older versions of FSL; I believe it is fixed in the latest version.  What version are you running?  This is not a problem with your data or with how you are using things.  The default values used for the design efficiency are shown in the Misc tab (prior to you pressing "estimate from data").
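For reference, one standard definition of design efficiency for a contrast c is 1 / (c (X'X)^-1 c'); FEAT's reported "% effect required" additionally scales a quantity like this by an estimated noise level, so the sketch below (with an invented boxcar design) only illustrates the relative behaviour, not FEAT's exact numbers.

```python
import numpy as np

def efficiency(X, c):
    """Design efficiency for contrast c: 1 / (c (X'X)^-1 c').
    Higher is better; a noise estimate would convert this into a
    '% effect required' figure like FEAT reports."""
    c = np.atleast_2d(c).astype(float)
    inv = np.linalg.inv(X.T @ X)
    return 1.0 / (c @ inv @ c.T).item()

# invented design: 30 s on / 30 s off boxcar EV at TR = 3 s, plus intercept
n = 100
boxcar = ((np.arange(n) // 10) % 2).astype(float)
X = np.column_stack([boxcar - boxcar.mean(), np.ones(n)])
print(efficiency(X, [1.0, 0.0]))
```

For this demeaned boxcar, c (X'X)^-1 c' is just 1 over the EV's sum of squares, so the efficiency is the sum of squares itself (25 here); the point is that efficiency depends only on the design matrix and contrast, while the noise estimate is what "estimate from data" supplies.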


  9.  If you have one subject with significant signal drop-out in their functional scan, and then include this subject at the group level, will the group-level test exclude those voxels which have missing data? Another way to ask this question: does the group-level analysis mask out any voxels that don't have data for every subject?

This will depend on the extent of the dropout.  You can check this by looking at the mask that is generated within each subject's FEAT directory, or at the combined mask, which is reported in the group FEAT results.
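As a toy illustration of how the group mask behaves (the tiny arrays below are invented stand-ins for each subject's FEAT-directory mask): the group analysis keeps only voxels present in every subject's mask, so dropout in any one subject removes those voxels for the whole group.

```python
import numpy as np

# toy subject-level masks (1 = usable data at that voxel); in a real
# analysis these come from each subject's FEAT directory
mask_sub1 = np.array([[1, 1, 1],
                      [1, 1, 0]])     # dropout in one corner
mask_sub2 = np.array([[1, 1, 1],
                      [1, 1, 1]])
mask_sub3 = np.array([[0, 1, 1],      # dropout in a different corner
                      [1, 1, 1]])

# group-level voxels = intersection of all subject masks
group_mask = mask_sub1 & mask_sub2 & mask_sub3
print(group_mask)
```

Each subject's dropout knocks its voxels out of the intersection, which is why checking the combined mask reported in the group FEAT results is worthwhile whenever one subject has substantial signal loss.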

All the best,
Mark



Thanks!!
Michelle
--
Michelle VanTieghem
PhD student in Psychology
Developmental Affective Neuroscience Lab
Columbia University
[log in to unmask]<mailto:[log in to unmask]>