Sending again because I'm not sure this message went through the first time...

---------- Forwarded message ----------
From: Sarah Izen <[log in to unmask]>
Date: Thu, Jun 11, 2015 at 12:36 PM
Subject: Re: [FSL] MELODIC experiment design
To: FSL - FMRIB's Software Library <[log in to unmask]>


Hi Anderson (or anyone else who wants to answer!),

Hoping you won't mind another (rather basic) question about my design. I only have one group and initially I just wanted to examine how functional connectivity changed as a function of one behavioral measure. Now I want to expand my analysis to have six behavioral measures plus sex and age. I'm a little unsure how to model the contrasts. Essentially, I am just interested to see if any of these variables has an effect on functional connectivity. Here is what I have so far.

9 EVs:
EV1: group mean
EV2: sex
EV3: age
EV4: behavioral measure 1
EV5: behavioral measure 2
EV6: behavioral measure 3
EV7: behavioral measure 4
EV8: behavioral measure 5
EV9: behavioral measure 6

I am quite new at all of this so I'm really not sure if I'm going in the right direction but here is my initial matrix:

      EV1  EV2  EV3  EV4  EV5  EV6  EV7  EV8  EV9
C1     0    1    0    0    0    0    0    0    0
C2     0    0    1    0    0    0    0    0    0
C3     0    0    0    1    0    0    0    0    0
C4     0    0    0    0    1    0    0    0    0
C5     0    0    0    0    0    1    0    0    0
C6     0    0    0    0    0    0    1    0    0
C7     0    0    0    0    0    0    0    1    0
C8     0    0    0    0    0    0    0    0    1

It may turn out that we don't actually care about sex and age, and so wouldn't need contrasts for them, just for them to be regressed out, but I'm kind of interested to see initially whether they have any significance. So, I have two questions. Does the order of the EVs in the design matrix matter at all? Also, if I want to look at negative contrasts, does it make more sense to double the number of contrasts and have a -1 for each, or to run a separate analysis, also with 8 contrasts, but with only -1s?
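In case it helps, here is how I was planning to write the contrasts above as a design.con file (I'm assuming this is the right VEST format; the negative contrasts, if I go that route, would presumably just be extra rows with -1 in the corresponding column):

/NumWaves 9
/NumContrasts 8
/Matrix
0 1 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0
0 0 0 0 1 0 0 0 0
0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 1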

Thanks!

Sarah

On Thu, Apr 23, 2015 at 3:29 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Please, see below:

On 22 April 2015 at 21:23, Sarah Izen <[log in to unmask]> wrote:
Hi,

Thanks so much. I really appreciate your help. My lab mostly uses SPM, so FSL is totally new territory for us. I may have been confused by the "required effect" number in the .con file. Is the required effect number the same as the t-stat? For some reason I am getting a really high required effect (3.879). I have 24 participants.

You can pretty much ignore this value. It's difficult to interpret it in the dual_regression context and isn't used by randomise.
 

Also, I can't find any documentation on what type of FWE is used.
 

Have you checked the randomise webpage? http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Randomise

 
It's obvious from my files that the FWE is getting rid of A LOT of stuff. I'm also wondering what exactly TFCE does. Is it an enhancement or some type of multiple comparisons correction?
 

TFCE is a way to identify strong and spatially distributed signals. It is described in this paper: http://www.ncbi.nlm.nih.gov/pubmed/18501637
It is an enhancement. TFCE results need correction too.
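If it helps to see it written out: for each voxel, TFCE integrates the extent e(h) of the cluster supporting that voxel at each cluster-forming threshold h up to the voxel's own statistic value, roughly TFCE = integral of e(h)^E * h^H dh, with E = 0.5 and H = 2 as the defaults in randomise (if I remember the defaults correctly). The enhanced image is then what goes through the permutation-based correction.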
 

If it's enhancement, why do my clusters look smaller in my tfce p image than in the raw tstat image?


The size of the surviving regions depends on significance. Are you looking at the same correction (i.e., fwep for both, or uncp for both)? At any rate, while TFCE tends to be more powerful, the local configuration of the signal in the neighbourhood and support region doesn't always guarantee more power.

 
Also, if the TFCE essentially has the same goal as smoothing, was it redundant to have done spatial smoothing during prestats?


TFCE does not have the same goals as smoothing. It's something entirely different.

 
I've heard that the TFCE has a more stringent correction. If that's the case, why do we also do a FWE?


TFCE isn't a correction, but a way to strengthen the ability to identify spatial signals. It isn't more or less stringent, as the stringency depends on the permutation test, which ensures exactness of p-values. It tends to be more powerful, though.

 

Also, is there a way to look at numbers of voxels in clusters in FSLView?

I'm not sure in FSLview, but you can get that easily with the command "cluster", or thresholding with "fslmaths" then using "fslstats".
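For example (the filename here is just a placeholder for whichever corrected p image you are looking at):

cluster --in=dr_stage3_ic0016_tfce_corrp_tstat1 --thresh=0.95

or, to simply count all surviving voxels:

fslmaths dr_stage3_ic0016_tfce_corrp_tstat1 -thr 0.95 -bin sig_mask
fslstats sig_mask -V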

All the best,

Anderson


 

Thanks,
Sarah

On Mon, Apr 20, 2015 at 1:49 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Please, see below:


On 16 April 2015 at 20:58, Sarah Izen <[log in to unmask]> wrote:
Hi Anderson,

Thanks so much for all your help. I ran the dual regression and randomise and I guess it automatically does the TFCE per the command I used but I'm wondering if you could give me some insight as to which randomise output might be the best for my situation. What are some reasons to choose TFCE or cluster-based, and further, cluster-based extent or mass?

By default dual_regression uses TFCE only, but cluster extent and/or mass can be included too. Each of these statistics tells slightly different things about the data, following its respective definition. TFCE doesn't require a cluster-forming threshold, which is generally a benefit. In principle any of these can be used, as long as it is properly corrected for multiple testing.
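If you do want cluster extent and/or mass in addition to TFCE, one option (just a sketch; the stage-2 filename depends on your output directory and component numbering) is to run randomise yourself on the merged stage-2 file for a given component, e.g.:

randomise -i dr_stage2_ic0016 -o my_randomise_ic0016 -d design.mat -t design.con -n 5000 -T -c 2.3 -C 2.3

where -T requests TFCE, -c requests cluster extent with a cluster-forming threshold of 2.3 on the t-statistic, and -C requests cluster mass with the same threshold.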

 

Additionally, for some reason my tstat map does not seem to be matching up with my uncorrected p map. The tstat map shows mostly the same activations but with many more voxels activated. Do you have any idea why this could be? I compared them by setting my tstat min to 1.234, which should be my cutoff for .05 significance. I am attaching a screenshot of the two of them next to each other.

The map on the left is a t-statistic. If various assumptions about the data were guaranteed to be valid, and the sample size very large, the cutoff for p=0.05 would be about t=1.64 (so, higher than 1.234). The map on the right has the p-values not for the t-statistic, but for the TFCE computed using the t-statistic.

To make a fairer (but still rough) comparison, threshold the image on the left at 1.64 (or compute the exact value given the degrees of freedom of your test), and replace the image on the right with dr_stage3_ic0016_vox_p_tstat1, thresholded at 0.95. This file isn't produced by default; if you don't have it, you'd have to run randomise again, now including the option -x, and, if you are using the latest FSL, also adding the option --uncorrp (you can edit the dual_regression script; there is a clear indication for this towards the end of the script).

You don't actually need to do this, it's just in case you'd like to understand better what is going on.
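But if you do want to try it, the thresholding itself is just two fslmaths calls (assuming the usual output naming; the output names are just suggestions):

fslmaths dr_stage3_ic0016_tstat1 -thr 1.64 tstat1_thr
fslmaths dr_stage3_ic0016_vox_p_tstat1 -thr 0.95 vox_p_thr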

 

If I compare a component of my input data to that same component's randomise output files the activation doesn't always match up.

There is no guarantee that they would match. If you only want to see the voxels that are part of a given network, consider using a mask. See this question in the FAQ: http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/DualRegression/Faq
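As a rough sketch (the file names and the threshold of 3 are just examples, and I'm assuming the ic0016 numbering corresponds to volume 16 of the group melodic_IC file):

fslroi groupmelodic.ica/melodic_IC ic16_map 16 1
fslmaths ic16_map -thr 3 -bin network_mask
fslmaths dr_stage3_ic0016_tfce_corrp_tstat1 -mas network_mask result_within_network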

 
Does the activation in the output files signify areas that are functionally connected to the networks I used as my input data, or is the output data supposed to show areas of the input network that differ based on my covariate (behavioral scores)?

Neither, I'm afraid. The maps show either differences between groups (with the behavioural measure treated as a nuisance), or where the behavioural measure is associated (correlated) with the networks (with the group mean treated as a nuisance). It depends on the contrast used with the GLM.

All the best,

Anderson


 

Again, thanks so much. I really appreciate you taking the time to help me!

Sarah

On Thu, Apr 16, 2015 at 2:02 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Very quick:
- No need for -1
- Use the example from the manual (2 EVs: one intercept and one for the behavioural scores; a rough sketch follows below).
- No need to demean the behavioural EV, unless you wanted to test the intercept -- see Jeanette Mumford's page: http://mumford.fmripower.org/mean_centering
- No need for -D
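
For your 24 subjects the files would look roughly like this (the score values are placeholders; one row per subject, in the same order as the inputs given to dual_regression):

design.mat:
/NumWaves 2
/NumPoints 24
/Matrix
1 12.3
1 9.7
(... 22 more rows ...)

design.con:
/NumWaves 2
/NumContrasts 2
/Matrix
0 1
0 -1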

All the best,

Anderson


On 15 April 2015 at 18:44, Sarah Izen <[log in to unmask]> wrote:
Hi Anderson,

Thanks for your help. Just want to make sure I fully understand what you're saying. So would it be correct that I should not use the -1 option since I have the additional covariate?

I ran dual regression and randomise with this command:

dual_regression <4d_input> 1 design.mat design.con 500 <output> <inputs>

Is this correct? I'm a little concerned about my results because my corrected p-value results for each component don't correspond with activation in the input components at all. Should my corrected p-value images show areas of activation of my input components that are significantly correlated with my EV, or are they showing areas that are functionally connected to the areas activated in my input components?

I also just want to make sure I set up my design files in the right way because I was a little confused by the GLM wiki page. If I have one group with one EV do I set up the design with two EVs (i.e. group mean and behavioral scores) or just one EV (behavioral scores)? Am I correct in thinking I can just do the one EV if I add the -D option at the end of my dual regression command?

Sorry for so many questions! I really appreciate your help.


Sarah

On Fri, Apr 10, 2015 at 2:25 AM, Anderson M. Winkler <[log in to unmask]> wrote:
Hi Sarah,

Please, see below:


On 8 April 2015 at 21:56, Sarah Izen <[log in to unmask]> wrote:
Hi,

Thanks so much - I realized this right after I emailed the list. I just have a couple of questions about my design. I have one group and one EV (scores on behavioral tests). I just want to make sure I am on the right track with my commands.

For dual regression: dual_regression <4d_input_data> 1 <design.mat> <design.con> 0 <output_directory> <inputs>

For randomise: randomise -i <input> -o <output> -1

Are these correct? I'm not positive about the randomise command - I know the user guide says you do not need to specify design.mat and design.con files for a one sample test but just want to make sure I'm doing it right.

It would be right if it weren't for the covariate. The simpler randomise call above will run a one-sample t-test without any other EV. For your design you need the full design.mat and design.con files, as in the example in the GLM manual.

 
Additionally, I know commands can be written to do both dual regression and randomise at the same time but I'm not sure how to do that with my design since I'm assuming dual regression still needs the two design files that randomise doesn't need? I can run them separately but just wondered if there was an easier way.

As you have the extra EV, you won't be using the randomise call as above, so the easier way is to provide the design files to the dual_regression script.
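That is, something along these lines (file names and the number of permutations are just placeholders):

dual_regression groupmelodic.ica/melodic_IC 1 design.mat design.con 5000 my_output_dir sub01_func.nii.gz sub02_func.nii.gz ...

and the script will call randomise internally with those design files.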

 

Also, with regard to interpreting corrected p-value images for my experiment, if I am looking at behavioral scores, does that mean that areas of activation in corrected p-value images are areas that vary with the subjects' behavioral scores? Just want to make sure I am interpreting this in the right way.

If the contrast tests the behavioural scores, the results are for the behavioural scores. Otherwise it will be testing just the mean.
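For instance, with EV1 as the intercept (group mean) and EV2 as the behavioural scores, a contrast of [0 1] tests the scores (and [0 -1] the negative association), whereas [1 0] would test just the mean.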

All the best,

Anderson

 

Thanks so much for your help,
Sarah

On Tue, Apr 7, 2015 at 1:41 AM, Stephen Smith <[log in to unmask]> wrote:
Hi - have another look at the FSL wiki doc on ICA/MELODIC, dual-regression and randomise.   dualreg+randomise should still be the way to go with your design.
Cheers.


On 6 Apr 2015, at 17:28, Sarah Izen <[log in to unmask]> wrote:

Hi all,

I just want to make sure I'm on the right track with my analysis. I am very new to FSL! I have a set of resting state data that I ran using MELODIC. I preprocessed everything, denoised, ran melodic, etc. Now, I want to compare my networks to scores on behavioral tests. From what I was told, I can do this with GLM - single group with additional covariate. But, as I try to set it up I see that I have to input lower level FEAT directories. How can I use my melodic data as an input? I've only really ever seen melodic used with dual regression, but that does not seem appropriate in my case since I only have one group of participants.

Thanks,
Sarah


---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
[log in to unmask]    http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------

Stop the cultural destruction of Tibet