Hi,

Thanks so much. I really appreciate your help. My lab mostly uses SPM, so FSL is totally new territory for us. I may have been confused by the "required effect" number in the .con file. Is the required effect number the same as the t-stat? For some reason I am getting a really high required effect (3.879). I have 24 participants.

Also, I can't find any documentation on what type of FWE correction is used. It's obvious from my files that the FWE correction is getting rid of A LOT of stuff. I'm also wondering what exactly TFCE does. Is it an enhancement or some type of multiple comparisons correction? If it's an enhancement, why do my clusters look smaller in my TFCE p image than in the raw t-stat image? Also, if TFCE essentially has the same goal as smoothing, was it redundant to have done spatial smoothing during prestats? I've heard that TFCE has a more stringent correction. If that's the case, why do we also do an FWE correction?

Also, is there a way to look at the number of voxels in each cluster in FSLView?

Thanks,
Sarah

On Mon, Apr 20, 2015 at 1:49 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Please, see below:


On 16 April 2015 at 20:58, Sarah Izen <[log in to unmask]> wrote:
Hi Anderson,

Thanks so much for all your help. I ran the dual regression and randomise, and I guess it automatically does TFCE per the command I used, but I'm wondering if you could give me some insight into which randomise output might be best for my situation. What are some reasons to choose TFCE or cluster-based statistics, and further, cluster-based extent or mass?

By default, dual_regression does TFCE only, but cluster extent and/or mass may be included too. Each of these statistics tells slightly different things about the data, following their respective definitions. TFCE doesn't require a cluster-forming threshold, which is generally a benefit. In principle any of these can be used, as long as it is properly corrected for multiple testing.
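For reference, a rough sketch of how the cluster statistics could be requested: in randomise, -c asks for cluster extent and -C for cluster mass, each taking a cluster-forming threshold, and they can be given alongside -T for TFCE. The filenames below follow the usual dual_regression naming but are really placeholders, and the threshold of 2.3 is just a common choice, not a recommendation:

randomise -i dr_stage2_ic0016.nii.gz -o dr_stage3_ic0016 -m mask -d design.mat -t design.con -n 5000 -T -c 2.3 -C 2.3

In practice this means either editing the randomise call inside the dual_regression script or re-running randomise on the stage-2 outputs.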

 

Additionally, for some reason my t-stat map does not seem to match up with my uncorrected p map. The t-stat map shows mostly the same activations but with many more voxels activated. Do you have any idea why this could be? I compared them by setting my t-stat minimum to 1.234, which should be my cutoff for .05 significance. I am attaching a screenshot of the two of them next to each other.

The map on the left is a t-statistic. If various assumptions about the data were guaranteed to be valid, and the sample size very large, the cutoff for p=0.05 would be about t=1.64 (so, higher than 1.234). The map on the right has the p-values not for the t-statistic, but for the TFCE computed using the t-statistic.
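As a side note, with 24 subjects and a 2-EV design (intercept plus scores), the degrees of freedom would be 24 - 2 = 22, and the exact one-tailed cutoff for p=0.05 could be computed, for example, with a quick Python/scipy one-liner (assuming scipy is installed):

python -c "from scipy.stats import t; print(t.ppf(0.95, 22))"

which gives roughly 1.72.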

To make a fairer comparison (but still rough), threshold the image on the left at 1.64 (or at the exact value for the degrees of freedom of your test, as in the sketch above), and replace the image on the right with dr_stage3_ic0016_vox_p_tstat1, thresholded at 0.95. This file isn't produced by default, and if you don't have it, you'd have to run randomise again, now including the option -x and, if you are using the latest FSL, also adding the option --uncorrp (you can edit the dual_regression script; there is a clear indication for this towards the end of the script).
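A rough sketch of such a re-run for this component (treat the filenames as placeholders for your own, and keep the same design files and number of permutations):

randomise -i dr_stage2_ic0016.nii.gz -o dr_stage3_ic0016 -m mask -d design.mat -t design.con -n 500 -T -x --uncorrp

Here -x requests the voxelwise (corrected) p-value images and --uncorrp additionally writes the uncorrected ones, which is where dr_stage3_ic0016_vox_p_tstat1 would come from.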

You don't actually need to do this, it's just in case you'd like to understand better what is going on.

 

If I compare a component of my input data to that same component's randomise output files, the activation doesn't always match up.

There is no guarantee that they would match. If you only want to see the voxels that are part of a given network, consider using a mask. See this question in the FAQ: http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/DualRegression/Faq
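A minimal sketch of one way to do the masking with the FSL command-line tools (the filenames and the threshold of 3 are placeholders, and the 16 assumes the component numbering matches the 0-based volume index in the melodic_IC file):

fslroi melodic_IC ic16 16 1
fslmaths ic16 -thr 3 -bin ic16_mask
fslmaths dr_stage3_ic0016_tfce_corrp_tstat1 -mas ic16_mask ic0016_corrp_masked

That is, extract the component map, binarise it above some threshold, and use the result to mask the randomise output.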

 
Does the activation in the output files signify areas that are functionally connected to the networks I used as my input data, or is the output data supposed to show areas of the input network that differ based on my covariate (behavioral scores)?

Neither, I'm afraid. The maps show either differences between groups (with the behavioural scores treated as nuisance), or where the behavioural scores are associated (correlated) with the networks (with the group treated as nuisance). It depends on the contrast used with the GLM: a contrast on the scores EV tests the association with the behavioural scores, whereas a contrast on the intercept tests just the mean.

All the best,

Anderson


 

Again, thanks so much. I really appreciate you taking the time to help me!

Sarah

On Thu, Apr 16, 2015 at 2:02 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Very quick:
- No need for -1
- Use the example from the manual (2 EVs: one intercept and one for the behavioural scores) -- a sketch of such files is below, after this list.
- No need to demean the behavioural EV, unless you wanted to test the intercept -- see Jeanette Mumford's page: http://mumford.fmripower.org/mean_centering
- No need for -D
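
As a sketch (the score values are made up, and the rows must be in the same order as the inputs given to dual_regression), design.mat would look like:

/NumWaves 2
/NumPoints 24
/Matrix
1 12.5
1 9.0
...
1 14.2

and design.con, testing a positive and a negative association with the scores:

/NumWaves 2
/NumContrasts 2
/Matrix
0 1
0 -1

These can be written by hand, produced with the Glm GUI, or converted from plain text files with Text2Vest.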

All the best,

Anderson


On 15 April 2015 at 18:44, Sarah Izen <[log in to unmask]> wrote:
Hi Anderson,

Thanks for your help. Just want to make sure I fully understand what you're saying. So would it be correct that I should not use the -1 option since I have the additional covariate?

I ran dual regression and randomise with this command:

dual_regression <4d_input> 1 design.mat design.con 500 <output> <inputs>

Is this correct? I'm a little concerned about my results because my corrected p-value results for each component don't correspond with activation in the input components at all. Should my corrected p-value images show areas of activation of my input components that are significantly correlated with my EV, or are they showing areas that are functionally connected to the areas activated in my input components?

I also just want to make sure I set up my design files in the right way because I was a little confused by the GLM wiki page. If I have one group with one EV do I set up the design with two EVs (i.e. group mean and behavioral scores) or just one EV (behavioral scores)? Am I correct in thinking I can just do the one EV if I add the -D option at the end of my dual regression command?

Sorry for so many questions! I really appreciate your help.


Sarah

On Fri, Apr 10, 2015 at 2:25 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Please, see below:


On 8 April 2015 at 21:56, Sarah Izen <[log in to unmask]> wrote:
Hi,

Thanks so much - I realized this right after I emailed the list. I just have a couple of questions about my design. I have one group and one EV (scores on behavioral tests). I just want to make sure I am on the right track with my commands.

For dual regression: dual_regression <4d_input_data> 1 <design.mat> <design.con> 0 <output_directory> <inputs>

For randomise: randomise -i <input> -o <output> -1

Are these correct? I'm not positive about the randomise command - I know the user guide says you do not need to specify design.mat and design.con files for a one-sample test, but I just want to make sure I'm doing it right.

It would be right if it weren't for the covariate. The simpler randomise call above will run a 1-sample t-test without any other EV. For your design you need the full design.mat and design.con files, as in the example in the GLM manual.
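Roughly, the call would then look something like this (names in angle brackets are placeholders), with -d and -t taking the place of the -1 option, and -T requesting TFCE:

randomise -i <4D_input> -o <output_basename> -d design.mat -t design.con -m <mask> -n 5000 -T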

 
Additionally, I know commands can be written to do both dual regression and randomise at the same time, but I'm not sure how to do that with my design, since I'm assuming dual regression still needs the two design files that randomise doesn't need? I can run them separately but just wondered if there was an easier way.

As you have the extra EV, you won't be using the randomise call as above, so the easier way is to provide the design files to the dual_regression script.
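For reference, the argument order is roughly as in the dual_regression usage message (placeholders in angle brackets):

dual_regression <group_IC_maps> <des_norm> design.mat design.con <n_perm> <output_dir> <input1> <input2> ...

where <group_IC_maps> is the melodic_IC file from the group ICA, <des_norm> (0 or 1) controls variance normalisation of the timecourses, <n_perm> is the number of permutations passed to randomise, and the <input*> files are the subjects' 4D datasets, in the same order as the rows of design.mat.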

 

Also, with regard to interpreting corrected p-value images for my experiment: if I am looking at behavioral scores, does that mean that areas of activation in the corrected p-value images are areas that vary with the subjects' behavioral scores? I just want to make sure I am interpreting this in the right way.

If the contrast tests the behavioural scores, the results are for the behavioural scores. Otherwise it will be testing just the mean.

All the best,

Anderson

 

Thanks so much for your help,
Sarah

On Tue, Apr 7, 2015 at 1:41 AM, Stephen Smith <[log in to unmask]> wrote:
Hi - have another look at the FSL wiki doc on ICA/MELODIC, dual-regression and randomise.   dualreg+randomise should still be the way to go with your design.
Cheers.


On 6 Apr 2015, at 17:28, Sarah Izen <[log in to unmask]> wrote:

Hi all,

I just want to make sure I'm on the right track with my analysis. I am very new to FSL! I have a set of resting state data that I ran using MELODIC. I preprocessed everything, denoised, ran melodic, etc. Now, I want to compare my networks to scores on behavioral tests. From what I was told, I can do this with GLM - single group with additional covariate. But, as I try to set it up I see that I have to input lower-level FEAT directories. How can I use my melodic data as an input? I've only really ever seen melodic used with dual regression, but that does not seem appropriate in my case since I only have one group of participants.

Thanks,
Sarah


---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
[log in to unmask]    http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------

Stop the cultural destruction of Tibet