Hi Anderson,

I had a quick question about visualizing my mcfwep images from PALM.  I used FLIRT to register them to 0.5mm space and have been trying to overlay them onto a 0.5mm standard-space brain in FSLView.  That all works fine.  I'm a little lost, though, on how to threshold so that everything above 1.301 shows as significant.  Where do I input that value? Normally I would just enter the min/max in the upper right of the FSLView window, but that doesn't seem to be working.

Also, I'm in the middle of writing up my results.  I've chosen to use the tfce_mcfwep files, which, from what I've been reading, are the most rigorous.  Should I also report the cfwep, fwep, and mfwep results, with the stipulation that they are not corrected across both modalities and contrasts, but rather only across contrasts or modalities?

Thanks again for your help!

Pranish
> On Jun 9, 2016, at 3:20 AM, Anderson M. Winkler <[log in to unmask]> wrote:
> 
> Hi Pranish,
> 
> Try this:
> 
> # for each mcfwep image, print its name followed by its min/max intensities:
> for x in *mcfwep* ; do echo "${x} $(fslstats ${x} -R)" ; done
> 
> All the best,
> 
> Anderson
> 
> 
> On 9 June 2016 at 04:12, Pranish Kantak <[log in to unmask]> wrote:
> Hi Anderson,
> 
> Thanks for that. My analysis has finished, and thankfully it looks like I have some significance.  Is there a way to check all of the logp images short of looking through every single one in FSLView? I tried writing a script with fslstats -R, but that didn't seem to work.
> 
> Many thanks,
> 
> Pranish
> 
>> On Jun 7, 2016, at 4:56 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>> 
>> Hi Pranish,
>> 
>> Please, see below:
>> 
>> On 7 June 2016 at 02:48, Pranish Kantak <[log in to unmask]> wrote:
>> Hi Anderson,
>> 
>> I checked the masks, and you are correct! They are the same.  I understand exactly what these p-values mean in an abstract sense.  For the sake of, say, a manuscript, I would think that the corrected p-values across modalities + contrasts would probably be the most rigorous?
>> 
>> Yes.
>>  
>> Additionally, it also output TFCE results for the same p-values.  I guess I'm just a little confused, because the randomise call in dual regression also outputs these files.  
>> 
>> Yes, like randomise, PALM also produces p-values for the TFCE statistic. There is a table at the far end of the User Guide page outlining the similarities and differences.
>>  
>> Also, when you say threshold at 1.301, I assume that it goes from 0 - 1.301 when I am adjusting in FSLView?
>> 
>> No, no: the threshold is from 1.301 upwards. The higher the -log10(p), the more significant.
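>> 
>> For instance, once the image is in -logp form, fslmaths can keep only the significant voxels (a sketch; the file name here is hypothetical, so adjust to your actual PALM output):
>> 
>> fslmaths my_tfce_logp_mcfwep_image -thr 1.301 my_mcfwep_sig
>> 
>> Alternatively, since for overlays the display-range minimum in FSLView acts as a lower threshold, setting the min to 1.301 should hide everything below it.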
>>  
>> Is there a reason that, when I threshold from 0 - 0.05, it does not look correct (if I didn't use -logp)?
>> 
>> It is correct -- only harder to see, as we usually look for brighter areas, whereas with p-values we'd look for those closer to zero.
>>  
>> I'm running the analysis again using -logp, so we'll see how it looks.  Finally, the dual regression call doesn't seem to output uncorrected p-values.  I only get the t-stat image and then the tfce_corrp_tstat images (from what I understand, these are corrected for FWER).
>> 
>> That's because of the -nouncorrected option. With many components, and if using the tail approximation as indicated, computing the uncorrected p-values requires a lot of memory.
>>  
>> 
>> Thanks again for all your help, and sorry for the basic questions; I'm teaching myself a lot of this, and it tends to get quite complicated!
>> 
>> You may want to have a look at the manual. Many of these details are explained there.
>> 
>> All the best,
>> 
>> Anderson
>> 
>>  
>> 
>> Pranish
>> 
>>> On Jun 6, 2016, at 2:29 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>> 
>>> Hi Pranish,
>>> 
>>> Please, see below:
>>> 
>>> On 5 June 2016 at 18:03, Pranish Kantak <[log in to unmask]> wrote:
>>> Hi Anderson,
>>> 
>>> Thanks for your response.  I don't entirely understand what you are saying about the masks.  One mask was created by the Melodic call; the other was created by dual regression.  Am I correct in thinking that the mask from the ICA is literally just a binary mask of the entire brain, and that it's literally the same thing for the dual regression?
>>> 
>>> It's possible to open the file and see what is inside. It should be a common mask that removes all constant voxels across subjects (i.e., voxels with zero variance). If ICA was run on the same subjects, the mask is quite likely the very same.
>>> 
>>> This is irrelevant anyway: in addition to whatever mask is supplied, PALM will further remove voxels that have zero variance, possibly producing a tighter mask. Use the option -savemask to save it to disk.
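>>> 
>>> If curious, such a variance-based mask can be approximated with fslmaths (a sketch; the input name is just an example):
>>> 
>>> fslmaths dr_stage2_ic0001.nii.gz -Tstd -bin varmask
>>> 
>>> Here -Tstd takes the standard deviation across the 4th dimension, and -bin sets every non-zero voxel to 1, so voxels with zero variance drop out.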
>>> 
>>>  
>>> 
>>> Also, PALM has run and it has output quite a few files from the corrections across modalities and contrasts.  How do I go about interpreting these?
>>> 
>>> The manual contains descriptions of all the outputs. In short: uncp are uncorrected; fwep are corrected across space; cfwep across space and contrasts; mfwep across space and modalities; and finally mcfwep across space, contrasts, and modalities. Here each component is a modality.
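>>> 
>>> A quick way to see which of these images contain anything significant (a sketch; it relies on voxels outside the mask being exactly zero, which fslstats ignores):
>>> 
>>> for f in *fwep*.nii.gz ; do echo "${f}: $(fslstats ${f} -u 0.05 -V)" ; done
>>> 
>>> The first number printed for each file is the count of voxels with p below 0.05.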
>>> 
>>>  
>>> As in, do I threshold from 0.95 - 1 as I would from the output of randomise?
>>> 
>>> Nope. PALM saves p-values, not 1-p. If you'd like 1-p, use the option -save1-p. Alternatively, consider the option -logp, which saves -log10(p) and thus can be thresholded at -log10(0.05) ≈ 1.301; it also helps to make nicer, readable figures.
>>> 
>>>  
>>> Lastly, would the FDR correction just be a lot easier?
>>> 
>>> You mean, the command? Yes, it's very easy, no doubt. Only note that the correction is then across space only, not across modalities and/or contrasts. Also, please be careful with FDR across modalities & contrasts, as the proportion of false positives within each of these may not be at the level q, only globally (this is expected).
>>> 
>>>  
>>> I would just have to feed each of the t statistic images into the fdr command and use the FDR-corrected p-value for the “significance” threshold right?
>>> 
>>> The fdr command takes in uncorrected p-values. You'd enter these uncorrected p-values and it will correct across space (voxels) using FDR.
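>>> 
>>> For example (a sketch; the input name is hypothetical and should be an image of uncorrected p-values):
>>> 
>>> fdr -i my_uncp_tstat1 -m mask -q 0.05
>>> 
>>> This prints the probability threshold below which voxels are significant at FDR q = 0.05.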
>>> 
>>> All the best,
>>> 
>>> Anderson
>>> 
>>>  
>>> 
>>> Thanks again for all you've helped with this week; I appreciate it.
>>> 
>>> Pranish
>>> 
>>>> On Jun 5, 2016, at 5:31 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>> 
>>>> Hi Pranish,
>>>> 
>>>> Please, see below:
>>>> 
>>>> 
>>>> On 5 June 2016 at 00:43, Pranish Kantak <[log in to unmask]> wrote:
>>>> Hi Anderson,
>>>> 
>>>> Thank you for your input; I'm running PALM right now, so we'll see what happens.  Is the mask that is input just the mask from dual regression, or is it the mask from the ICA decomposition?  
>>>> 
>>>> Is there any reason why these aren't the same? Use the mask that is consistent with the hypothesis (with ICA we generally look into the whole brain).
>>>> 
>>>>  
>>>> Also, I chose not to demean, but just in case I'm wrong, I'm attaching the design.mat and design.con files to this email.  
>>>> 
>>>> No need for -demean with this design.
>>>>  
>>>> So, as I understand it, PALM corrects the p-values for FWER; could I just as well use the dr_stage3_tstat?.nii.gz files and the fdr command in FSL to correct for false discovery rate?
>>>> 
>>>> Yes, or use the option -fdr in PALM.
>>>> 
>>>> All the best,
>>>> 
>>>> Anderson
>>>> 
>>>>  
>>>> 
>>>> Thanks again for all the help, I really appreciate it.
>>>> 
>>>> Pranish
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On Jun 4, 2016, at 6:06 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>>> 
>>>>> Hi Pranish,
>>>>> 
>>>>> Please, see below:
>>>>> 
>>>>> 
>>>>> On 3 June 2016 at 11:59, Pranish Kantak <[log in to unmask]> wrote:
>>>>> Hi Anderson,
>>>>> 
>>>>> Thanks, I figured as much.  Am I correct in assuming that, using Bonferroni, I would do p < 0.05/15, if I'm only looking at 1 contrast (cases > controls) in one direction over 15 of the 30 components?  
>>>>> 
>>>>> Yes. The only thing is that looking at both directions is likely more informative (why suspect that, necessarily, cases > controls? The opposite would seem just as reasonable in principle).
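>>>>> 
>>>>> Just spelling out the arithmetic: one direction over 15 components gives 0.05/15 ≈ 0.0033, whereas testing both directions doubles the number of tests, giving 0.05/30 ≈ 0.0017.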
>>>>> 
>>>>>  
>>>>> This seems extremely conservative (although I do have some dr_stage3 files that pass this correction).  Would PALM be a better, less conservative option?
>>>>> 
>>>>> Yes, correction in PALM will be less conservative than Bonferroni.
>>>>> 
>>>>>  
>>>>> And if so, is the following syntax correct: 
>>>>> 
>>>>> -i dr_stage2_ic00??.nii.gz design.mat design.con [these would be the same as from dual regression?] -corrmod -m mask.ii [from dual regression] -T -n 5000 -corrcon -logp -demean -o palm_results
>>>>> 
>>>>> Try something as this:
>>>>> 
>>>>> palm -i dr_stage2_ic0001.nii -i dr_stage2_ic0002.nii -i dr_stage3_ic0001.nii -d design.mat -t design.con -corrmod -corrcon -m mask.nii -T -n 500 -accel tail -nouncorrected -o palm_results
>>>>> 
>>>>> The "-accel tail" is a new option that expedites the test and gives results very similar to the non-accelerated case. It's described here:
>>>>> http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/PALM/FasterInference
>>>>> 
>>>>> Regarding "-demean", it depends on the design. I'm not seeing it, so I'm not including it. It's up to you to include it or not.
>>>>> 
>>>>> All the best,
>>>>> 
>>>>> Anderson
>>>>> 
>>>>>  
>>>>> 
>>>>> Thanks again,
>>>>> 
>>>>> Pranish
>>>>> 
>>>>>> On Jun 3, 2016, at 3:59 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>>>> 
>>>>>> Hi Pranish,
>>>>>> 
>>>>>> Yes, need to adjust. Either use Bonferroni over the components looked at, or use PALM with the option -corrmod.
>>>>>> 
>>>>>> All the best,
>>>>>> 
>>>>>> Anderson
>>>>>> 
>>>>>> 
>>>>>> On 2 June 2016 at 22:25, Pranish Kantak <[log in to unmask]> wrote:
>>>>>> Hi Anderson,
>>>>>> 
>>>>>> Ah! I think I understand it now.  My final question, I promise! With regard to the reference networks: since I'm only looking at networks that correlate with Smith et al., 2009 at r > 0.25, when I run my dual_regression command do I need to change the p-value threshold? Right now I'm looking at p < 0.05.  To give you more data: I have 14 controls and 12 patients, scanned pre and post intervention.  I ran Melodic extracting 30 components, used fslcc to find the networks of interest, and then fed the melodic_IC.nii.gz file into dual_regression with 5000 permutations.  I only picked the significant tfce_corrp_tstats that were the same components as the ones that correlated with the reference networks.  I got roughly 10 significant tfce_corrp_tstat results (for the contrast cases > controls).  I'm wondering whether I need to adjust the p-value to a more stringent value.  15 of my 30 components correlated with the Smith et al., 2009 reference networks.
>>>>>> 
>>>>>> Thanks for all your help!
>>>>>> 
>>>>>> Pranish
>>>>>>> On Jun 2, 2016, at 3:44 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>>>>> 
>>>>>>> Hi Pranish,
>>>>>>> 
>>>>>>> Please see below:
>>>>>>> 
>>>>>>> On 1 June 2016 at 22:45, Pranish Kantak <[log in to unmask]> wrote:
>>>>>>> Hi Anderson,
>>>>>>> 
>>>>>>> Thank you for that explanation, it helped clarify in my head exactly what is happening.  One place where I'm still a little confused, though, is when I read different papers.  I know you said the threshold is kind of arbitrary unless you use the mixture modelling, but where do these numbers come from?
>>>>>>> 
>>>>>>> They are indeed arbitrary -- i.e., no particular objective rule to define them.
>>>>>>>  
>>>>>>> For example… in Reineberg et al., 2015 in NeuroImage, the IC maps are said to be thresholded at Z > 5, whereas in the Smith et al., 2009 paper in NeuroImage, the IC maps are thresholded at Z = 3.  My question is basically where these numbers come from and where they get specified.
>>>>>>> 
>>>>>>> These might have been chosen so as to be the same across all ICs, whereas the mixture modelling would have given one threshold for each map. The value itself could perhaps have been different (say, z=4) in either study and the conclusions would have probably been the same.
>>>>>>>  
>>>>>>> I'm attaching the mixture model PNG and some of the output files from the Melodic run for one of my significantly correlated IC maps… do you think you could help me discern this? For example, when I set the contrast for this thresh_zstat.nii.gz in FSLView to [4, 14], does that mean my z value is 4 < Z < 14? 
>>>>>>> 
>>>>>>> In my previous response, the threshold and the limits used to display the image were somewhat mixed in the same paragraph, sorry for that. The arbitrary [-2, 11] refers to the window for display, whereas a threshold for the components themselves can use the mixture modelling. The latter is an objective rule, but even here the common 50:50 weighting of false positives and false negatives is arbitrary in its own right.
>>>>>>> 
>>>>>>> At any rate, yes, setting the window to [4, 14] means that what is shown are the z-scores between 4 and 14 (in this case, all positive only).
>>>>>>> 
>>>>>>> The mixture modelling, as you show below, includes two gamma distributions, and here too it's possible to find a threshold that yields a known proportion between false positives (the fraction of the main Gaussian above the threshold) and false negatives (the fraction of the right-hand gamma below the threshold). It's hard to tell visually where the threshold would be, but Melodic should produce thresholded maps after this has been calculated.
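>>>>>>> 
>>>>>>> Schematically (a sketch of the rule, not of Melodic's exact implementation): with the fitted null density f0 and activation density f1, with mixing proportions pi0 and pi1, the threshold is the z at which pi1*f1(z) >= pi0*f0(z), i.e., the point where the posterior probability of activation reaches 0.5 -- the 50:50 weighting mentioned above.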
>>>>>>> 
>>>>>>> Hope this helps.
>>>>>>> 
>>>>>>> All the best,
>>>>>>> 
>>>>>>> Anderson
>>>>>>> 
>>>>>>> 
>>>>>>>  
>>>>>>> 
>>>>>>> Sorry about the series of basic questions; I'm just trying to make sure I am reporting my data correctly.
>>>>>>> 
>>>>>>> Thanks again,
>>>>>>> 
>>>>>>> Pranish
>>>>>>> 
>>>>>>> <IC_7_MMfit.png>
>>>>>>>> On Jun 1, 2016, at 3:32 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>>>>>> 
>>>>>>>> Hi Pranish,
>>>>>>>> 
>>>>>>>> Please, see below:
>>>>>>>> 
>>>>>>>> On 1 June 2016 at 00:12, Pranish Kantak <[log in to unmask]> wrote:
>>>>>>>> Hi Anderson & FSL experts,
>>>>>>>> 
>>>>>>>> Thank you so much! I appreciate your help.  Two last questions, if you wouldn't mind answering them; I'd greatly appreciate it:
>>>>>>>> 
>>>>>>>> 1) So I took a chance and reran Melodic for all of my data… for some reason it now looks good and correlates with the Smith 2009 networks… any idea why this is? I guess I should count my blessings…
>>>>>>>> 
>>>>>>>> I have no idea.
>>>>>>>>  
>>>>>>>> 
>>>>>>>> 2) In terms of thresholding the maps, could you explain how the thresh_zstat images are thresholded?
>>>>>>>> 
>>>>>>>> The ICA factorises the input data, previously reorganised as a matrix of timecourses by voxels, into the product of a matrix of timecourses by components and a matrix of components by voxels, plus a matrix of timecourses by voxels corresponding to noise, i.e., variation not captured by any of the components. The z-score at each voxel is the value of the component at that voxel, taken from the components-by-voxels matrix (the spatial map of each IC), divided by the standard deviation of the noise at that voxel, already accounting for the degrees of freedom lost given the number of ICs.
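>>>>>>>> 
>>>>>>>> In rough matrix notation (a sketch that glosses over Melodic's exact variance normalisation): with X the T x V data matrix (timepoints by voxels), the ICA gives X ≈ A*S + E, where A is T x C (the timecourses of the C components), S is C x V (the spatial maps), and E is the residual noise. The z-score for component c at voxel v is then z(c,v) = S(c,v) / sigma_E(v), with sigma_E(v) the residual standard deviation at that voxel after accounting for the degrees of freedom used by the C components.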
>>>>>>>> 
>>>>>>>>  
>>>>>>>> When I view the images, I set the min to 4 and the max to 15, just like on the ICA practical website.  If I'm not mistaken, though, this is just the min/max of the image in terms of brightness/contrast? Or is it something else… what is the standard for imaging papers? Z = 3? And how do I set this, or is it set automatically?
>>>>>>>> 
>>>>>>>> There is no single standard way of thresholding. For most maps I use a range of [-2, 11], but that is totally empirical. It's possible to make the thresholding more objective through the use of mixture modelling, in which (usually) a 50% weight on false positives and false negatives is used. When running Melodic, include the option --mm to generate thresholded maps ready to display.
>>>>>>>> 
>>>>>>>> All the best,
>>>>>>>> 
>>>>>>>> Anderson
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> Thanks again!
>>>>>>>> 
>>>>>>>> Pranish
>>>>>>>> 
>>>>>>>>> On May 31, 2016, at 3:56 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>>>>>>> 
>>>>>>>>> Hi Pranish,
>>>>>>>>> 
>>>>>>>>> Please, see below:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On 30 May 2016 at 14:04, Pranish Kantak <[log in to unmask]> wrote:
>>>>>>>>> Hi Anderson & FSL Experts,
>>>>>>>>> 
>>>>>>>>> Thank you for the response.  How would you go about running a simulated experiment to demonstrate that the error rate is controlled? 
>>>>>>>>> 
>>>>>>>>> One way would be to simulate everything, but perhaps simpler and equally effective would be the following:
>>>>>>>>> 
>>>>>>>>> 1) Take a sample of controls only, say 50 subjects.
>>>>>>>>> 2) Subdivide randomly in "true controls" and "pseudo patients", 25 subjects for each.
>>>>>>>>> 3) Run Melodic on the 25 "true controls" using -d 20 (or some other fixed number).
>>>>>>>>> 4) Use the ICs found in #3 and run the dual regression comparing 25 vs. 25 as a two-sample t-test with, say, 2000 permutations.
>>>>>>>>> 5) Threshold the uncorrected p-values at 0.05 and count how many voxels survived. It should be about 5% for each IC. Take note of this number for all ICs.
>>>>>>>>> 6) Repeat steps 2-5 many times, say, 2000 times, each time writing down, for each IC, the percentage of voxels that survive (a rough shell sketch follows this list).
>>>>>>>>> 7) Sort the 2000*20 = 40k results and find the 2.5 and 97.5 percentiles that determine the 95% confidence interval. If 5% is below this interval, then there is an excess of false positives and the method should not be used. If 5% is above this interval, then the method is conservative. If within, then we can probably use it.
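>>>>>>>>> 
>>>>>>>>> A rough skeleton of steps 5-6 in shell (the file names are hypothetical, and the random split and the melodic/dual regression calls themselves are not shown):
>>>>>>>>> 
>>>>>>>>> for iter in $(seq 1 2000) ; do
>>>>>>>>>   # ... split the subjects and run the dual regression on the split here ...
>>>>>>>>>   ntot=$(fslstats mask.nii.gz -V | awk '{print $1}')      # voxels in the analysis mask
>>>>>>>>>   for f in uncp_ic??.nii.gz ; do                          # one uncorrected p-map per IC
>>>>>>>>>     nsig=$(fslstats ${f} -u 0.05 -V | awk '{print $1}')   # voxels with p < 0.05
>>>>>>>>>     awk -v i=${iter} -v f=${f} -v a=${nsig} -v b=${ntot} 'BEGIN{print i, f, a/b}' >> fractions.txt
>>>>>>>>>   done
>>>>>>>>> done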
>>>>>>>>> 
>>>>>>>>> Note that in the above, the ICs are all mixed together. The reasoning is that if there is no effect, the amount of false positives should be distributed evenly across all of them. Moreover, mapping one component onto another across iterations might not always be trivial. However, it's possible to simplify and take only the component that is most likely to be present in all runs, i.e., the DMN, and work only with that, ignoring all the others.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>  
>>>>>>>>> So I ran fslcc with the cases as well.  I’m copying my data in below:
>>>>>>>>> 
>>>>>>>>> Controls:
>>>>>>>>> Smith 2009 Reference Network	My Network	Cross Correlation
>>>>>>>>> 1 - medial visual 	2	0.373
>>>>>>>>> 1 - medial visual 	4	0.617
>>>>>>>>> 3 - lateral visual	2	0.605
>>>>>>>>> 3 - lateral visual	12	0.319
>>>>>>>>> 4 - DMN	1	0.338
>>>>>>>>> 4 - DMN	4	0.414
>>>>>>>>> 4 - DMN	13	0.429
>>>>>>>>> 4 - DMN	19	0.419
>>>>>>>>> 5 - Cerebellum	15	0.244
>>>>>>>>> 5 - Cerebellum	30	0.376
>>>>>>>>> 6 - Sensorimotor	11	0.333
>>>>>>>>> 6 - Sensorimotor	12	0.266
>>>>>>>>> 6 - Sensorimotor	18	0.351
>>>>>>>>> 6 - Sensorimotor	21	0.578
>>>>>>>>> 7 - Auditory	9	0.217
>>>>>>>>> 7 - Auditory	14	0.392
>>>>>>>>> 7 - Auditory	18	0.491
>>>>>>>>> 7 - Auditory	19	0.298
>>>>>>>>> 7 - Auditory	27	0.245
>>>>>>>>> 8 - Executive Control	1	0.456
>>>>>>>>> 8 - Executive Control	7	0.432
>>>>>>>>> 8 - Executive Control	10	0.205
>>>>>>>>> 8 - Executive Control	15	0.209
>>>>>>>>> 8 - Executive Control	18	0.246
>>>>>>>>> 8 - Executive Control	20	0.355
>>>>>>>>> 9 - Frontoparietal	3	0.325
>>>>>>>>> 9 - Frontoparietal	13	0.27
>>>>>>>>> 9 - Frontoparietal	28	0.264
>>>>>>>>> 10	9	0.358
>>>>>>>>> 10	12	0.352
>>>>>>>>> 10	13	0.27
>>>>>>>>> 
>>>>>>>>> Cases:
>>>>>>>>> 
>>>>>>>>> Smith 2009 Reference Network	My Network	Cross Correlation
>>>>>>>>> 1	2	0.209
>>>>>>>>> 1	4	0.286
>>>>>>>>> 1	11	0.49
>>>>>>>>> 1	24	0.514
>>>>>>>>> 2	4	0.226
>>>>>>>>> 2	11	0.334
>>>>>>>>> 2	23	0.394
>>>>>>>>> 3	4	0.635
>>>>>>>>> 4	3	0.352
>>>>>>>>> 4	19	0.442
>>>>>>>>> 4	24	0.417
>>>>>>>>> 6	2	0.209
>>>>>>>>> 6	10	0.299
>>>>>>>>> 6	14	0.531
>>>>>>>>> 6	15	0.233
>>>>>>>>> 6	18	0.217
>>>>>>>>> 7	2	0.596
>>>>>>>>> 7	8	0.229
>>>>>>>>> 7	14	0.219
>>>>>>>>> 7	19	0.223
>>>>>>>>> 7	27	0.344
>>>>>>>>> 8	2	0.234
>>>>>>>>> 8	3	0.435
>>>>>>>>> 8	10	0.32
>>>>>>>>> 8	17	0.281
>>>>>>>>> 9	5	0.436
>>>>>>>>> 9	6	0.239
>>>>>>>>> 9	18	0.235
>>>>>>>>> 10	8	0.256
>>>>>>>>> 10	15	0.261
>>>>>>>>> 10	18	0.484
>>>>>>>>> 
>>>>>>>>> I'm also now running a Melodic script with just my pre-intervention rs-fMRI scans (with both cases and controls).  Do you think that might help? Any advice is greatly appreciated.
>>>>>>>>> 
>>>>>>>>> A suggestion here is to use fslcc on the thresholded networks, as the noise outside the networks may drag the correlations down.
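>>>>>>>>> 
>>>>>>>>> For instance (a sketch; the name of the Smith map file is hypothetical, and z = 3 is just an example threshold):
>>>>>>>>> 
>>>>>>>>> fslmaths melodic_IC -thr 3 melodic_IC_thr
>>>>>>>>> fslmaths smith09_maps -thr 3 smith09_thr
>>>>>>>>> fslcc -t 0.204 melodic_IC_thr smith09_thr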
>>>>>>>>> 
>>>>>>>>> All the best,
>>>>>>>>> 
>>>>>>>>> Anderson
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>  
>>>>>>>>> 
>>>>>>>>> Thanks for all the help!
>>>>>>>>> 
>>>>>>>>> Pranish
>>>>>>>>> 
>>>>>>>>>> On May 30, 2016, at 3:46 AM, Anderson M. Winkler <[log in to unmask]> wrote:
>>>>>>>>>> 
>>>>>>>>>> Hi Pranish,
>>>>>>>>>> 
>>>>>>>>>> I'd be reluctant to endorse the approach of using controls only (or a single timepoint) instead of using either all subjects and timepoints or a balanced random subset of them. I understand that this approach has appeared in the literature, but I would like to see a simulated experiment demonstrating that the error rate is controlled even when a controls-only set of maps is used. Without that, I'd be cautious about taking this route.
>>>>>>>>>> 
>>>>>>>>>> What happens when you run with cases only? Could you perhaps try to identify one or a few subjects that might be driving the ICA completely off?
>>>>>>>>>> 
>>>>>>>>>> All the best,
>>>>>>>>>> 
>>>>>>>>>> Anderson
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On 28 May 2016 at 14:40, Pranish Kantak <[log in to unmask]> wrote:
>>>>>>>>>> Dear FSL Experts,
>>>>>>>>>> 
>>>>>>>>>> Thank you in advance for any advice on this problem.  I am running into something a bit confusing.  It is as follows:
>>>>>>>>>> 
>>>>>>>>>> I have a 2x2 ANOVA experimental design for a rs-fMRI experiment (cases vs. controls; pre-intervention vs. post-intervention).  I ran MELODIC with d=30 components, and then ran a dual regression.  When I check my melodic_IC.nii.gz file (2 scans per case, 2 scans per control, concatenated) against the reference networks provided by Yeo 2011 and Smith 2009 (using fslcc with -t .204), it does not really correlate with any of them.
>>>>>>>>>> 
>>>>>>>>>> So I figured that I probably shouldn't collapse across cases and controls when identifying RSNs.  I took out the cases (leaving the pre- and post-intervention scans of the controls) and ran MELODIC again, which output melodic_IC_controls.nii.gz.  When I use fslcc to compare this file with the reference networks, I get quite a bit of correlation.
>>>>>>>>>> 
>>>>>>>>>> My main questions are: A) Is it OK to do this just to identify which components correspond to which types of networks? And B) if I use the original melodic_IC.nii.gz (2 scans per case, 2 scans per control, concatenated) for the dual_regression, do the component numbers still correspond to the networks I've identified using melodic_IC_controls.nii.gz?  That's my biggest problem.
>>>>>>>>>> 
>>>>>>>>>> Thank you so much for your help! I'm extremely new to fMRI so any help with this matter would be greatly appreciated.
>>>>>>>>>> 
>>>>>>>>>> Cheers,
>>>>>>>>>> 
>>>>>>>>>> Pranish Kantak
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
>