Hi,

I'll try smoothing less and see the results. I know that there are many
differences between feat and randomise, but I just found it odd not being
able to replicate the results I found in feat. I'll do that and get back to
you once I'm done.

Thank you so much as always, Jeanette.

Best wishes

-- 
Andres 


From: Jeanette Mumford <[log in to unmask]>
Reply-To: FSL - FMRIB's Software Library <[log in to unmask]>
Date: Mon, 2 Apr 2012 12:51:44 -0500
To: <[log in to unmask]>
Subject: Re: [FSL] Different results between feat and randomise

Hi,

Tom or Steve may have better input than I do, but in my experience TFCE is
not always more robust than RFT based thresholding.   I saw this in a recent
email from Tom Nichols, pertaining to smoothing:

Are you referring to variance smoothing set with the -v option?  My
experience is setting this equal to the applied smoothing of the analysis
(e.g. 5mm FWHM) is all you need to try.  (Note, that randomise expects this
in units of std, so you have to divide FWHM by 2.35 to get sigma, e.g. 5mm
FWHM = 2.1mm sigma).

So, perhaps you are smoothing too much? If 6 is your smoothing in mm, try
6/2.35 instead.
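As a quick sanity check, the conversion above can be computed directly; a minimal Python sketch (the 2.35 in the email is the rounded value of 2·sqrt(2·ln 2) ≈ 2.3548, and the corrected randomise call in the comment is just this suggestion applied to the original command, not something from the thread):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian smoothing FWHM (mm) to the sigma that
    randomise's -v option expects: FWHM = 2*sqrt(2*ln(2)) * sigma."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # divisor ~= 2.3548

print(round(fwhm_to_sigma(5.0), 1))  # 5mm FWHM -> ~2.1mm sigma, as in the email
print(round(fwhm_to_sigma(6.0), 2))  # 6mm FWHM -> ~2.55mm sigma

# So the -v value in the original command would become roughly:
#   randomise -i ... -d design.mat -t design.con -v 2.55 -T -n 1000
```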

Good luck,

Jeanette

On Mon, Apr 2, 2012 at 10:12 AM, Andres Roman <[log in to unmask]> wrote:
> Hi Jeanette,
> 
> I ran randomise with 1000 permutations and now I obtain a tiny parietal
> cluster but the results from feat do not replicate at all.
> 
> Any hints why this might be?
> 
> B/w
> 
> 
> From: Jeanette Mumford <[log in to unmask]>
> Reply-To: FSL - FMRIB's Software Library <[log in to unmask]>
> Date: Mon, 2 Apr 2012 08:11:42 -0500
> To: <[log in to unmask]>
> Subject: Re: [FSL] Different results between feat and randomise
> Subject: Re: [FSL] Different results between feat and randomise
> 
> 
> Hi
> 
> If you only run 100 permutations, then the smallest possible p-value is
> 1/100=0.01.  So, by running so few permutations the test will be
> conservative.  Try 1000 instead.
> 
> Cheers,
> Jeanette
> 
> On Mon, Apr 2, 2012 at 6:44 AM, Andres Roman <[log in to unmask]> wrote:
>> Dear FSL List,
>> 
>> I have been running FSL on a delayed response task, using an epoch the
>> length of every trial and then adding a parametric modulator of load (3, 4,
>> 5 or 6). I have two groups and I'm using a very simple [1 -1] [-1 1]
>> contrast. I usually run feat first, since it is fairly simple to do so and
>> to create the design matrix, and then, using the same design.mat and
>> design.con files, I run randomise on the 4D concatenated
>> cope1/filtered_func_data.nii.gz file. The command I use for randomise is this:
>> 
>> randomise -i cope1.feat/filtered_func_data.nii.gz -o
>> ADHD_CON_vs_CON_PM_67_exc_cov -d design.mat -t design.con -v 6 -T -V -n 100
>> 
>> I use a variance smoothing of 6 since that is the smoothing I used in the
>> first-level analyses for this dataset, and I'm using -T for TFCE. I ran
>> randomise for only 100 permutations in order to save some time and get
>> preliminary results.
>> 
>> My problem is that I keep getting different results in feat than in
>> randomise. By different results I mean that feat is usually more
>> 'sensitive': I do get activation with it, but I then lose it when I run
>> randomise on exactly the same design matrix. I find this very odd, since
>> randomise is supposed to be quite robust and its results replicable. Even
>> more so, some people would argue that with TFCE I should obtain activation
>> patterns at least similar to those found in feat using the arbitrarily
>> selected threshold of z=2.3 for the cluster-based analysis.
>> 
>> I'd be very happy if you could help me understand this.
>> 
>> Thank you very much for your help.
>> 
>> Best wishes.
>> --
>> Andres