Hi,
On 28 Jan 2009, at 15:45, Robert Kelly wrote:
> In that case, could we accomplish the desired effect through
> fslmaths preprocessing of the original fMRI data by subtracting from
> each voxel its mean over time and dividing by its standard
> deviation, before running the lower-level FEATs followed by FEAT OLS
> and randomise unpaired t-test group comparison?
No, sorry - this would not be the same. The mean subtraction is
already done inside FILM timeseries processing in first-level FEAT, but
if you rescale the variance then you would get completely different
COPE values that would not correspond to either of your previous
analyses (FEAT-based higher-level or randomise).
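To see concretely why variance-normalising the data would rescale the COPEs, here is a minimal numpy sketch with a toy one-regressor GLM (illustrative only - none of this is FSL code, and the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy single-voxel timeseries with one regressor (EV)
x = rng.standard_normal(200)             # regressor
y = 3.0 * x + rng.standard_normal(200)   # voxel timeseries, true effect = 3

def glm_beta(y, x):
    """OLS parameter estimate for a one-regressor GLM (the 'COPE' here)."""
    return float(x @ y / (x @ x))

y_demeaned = y - y.mean()                # FILM already removes the mean
beta_demeaned = glm_beta(y_demeaned, x)

# additionally dividing by the voxel's std rescales the estimate by 1/std,
# so the resulting 'COPE' sits on a completely different scale
y_varnorm = y_demeaned / y.std()
beta_varnorm = glm_beta(y_varnorm, x)
```

Because the rescaling factor differs at every voxel and for every subject, the rescaled COPEs no longer correspond to either of the earlier analyses.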
There is no way currently of getting exactly the same underlying
modelling (as opposed to estimation/inference which is obviously going
to be different) in higher-level FLAME and randomise, although your
original idea of feeding the first-level Zstats into randomise (as
opposed to the "correct OLS" way of feeding COPEs) may indeed be
somewhat closer to what happens in FLAME (wrt weighting) than using
the COPEs - but it's still not the same as feeding COPEs+VARCOPEs into
FLAME. Wooly can comment if there's more subtlety to comment on here.
But we are working on various ways of extending randomise to include
more sophisticated modelling for the future.
Cheers, Steve.
> I presume that this method would not give the same results as FLAME,
> but would it be a meaningful and valid way of running FEAT OLS with
> variance-normalized COPEs? Would it be better to perform the
> preprocessing after slice timing correction, motion correction and
> spatial smoothing?
>
> Thanks again,
> Robert
>
> ----- Original Message -----
> From: Steve Smith <[log in to unmask]>
> Date: Wednesday, January 28, 2009 3:51 am
> Subject: Re: [FSL] Group comparison using unpaired t-test, FEAT OLS
> vs randomise
>
>> Hi,
>>
>> On 27 Jan 2009, at 21:16, Robert Kelly wrote:
>>
>>> Thank you Steve, for a very quick, clear and precise answer to my
>>> question. I now get identical results with randomise.
>>>
>>> A follow-up question: If we desire a t-test comparison using some
>>> form of variance-normalized COPEs (to compensate for potential
>>> differences in voxel-wise variance across subjects, the way that
>>> comparing Z-stat images instead of COPEs does), would FLAME meet
>>> this objective?
>>
>> Yes - that's the primary point of FLAME (as opposed to simple OLS,
>> which doesn't do any weighting).
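As a rough illustration of the weighting difference (a conceptual numpy sketch only - FLAME's actual estimation is considerably more sophisticated, and the numbers are invented):

```python
import numpy as np

# per-subject lower-level COPEs and VARCOPEs at one voxel (toy values)
copes = np.array([2.0, 2.5, 1.8, 6.0])
varcopes = np.array([0.5, 0.4, 0.6, 9.0])   # last subject is very noisy

# simple OLS (randomise / FEAT "Simple OLS"): unweighted mean, with the
# variance estimated from the scatter of the COPEs alone
ols_mean = copes.mean()

# FLAME-style idea (conceptual only): weight each subject by the inverse
# of its lower-level variance, so noisy subjects count for less
w = 1.0 / varcopes
weighted_mean = (w * copes).sum() / w.sum()

# the noisy subject drags the OLS mean up but barely moves the weighted one
```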
>>
>>> If so, after running FLAME is it possible to repeat the group
>>> comparison with randomise and obtain results (raw tstat images)
>>> identical to those of FLAME? If not, do you know of an easy way to
>>> accomplish this goal using FSL?
>>
>> Not currently - randomise is just OLS-based at present - possibly in
>> the future.
>>
>>> The reason I ask is that ultimately, we would like to
>>>
>>> A. Perform a standard, parametric (GRF-theory), cluster thresholding
>>> based on an unpaired t-test voxel-wise comparison of two groups of
>>> subjects (fMRI data); and
>>> B. Compare results with those obtained from randomise.
>>>
>>> I suppose we could stick to comparing FEAT-OLS output with randomise
>>> output, given that this is quite doable, but it would be nice if we
>>> could find an efficient way of performing the same analyses while
>>> adjusting for inter-subject differences in variance at each voxel.
>>
>> Your understanding is correct - at present, to make this comparison
>> you would need to use the OLS option in FEAT.
>>
>> Cheers.
>>
>>
>>>
>>>
>>> Cheers,
>>> Robert
>>>
>>>
>>> At 04:13 AM 1/26/2009, you wrote:
>>>> Hi - yes, one big difference here - second-level analyses are fed by
>>>> lower-level COPEs and NOT Z-stat images. Hence to get identical
>>>> results (and indeed you should for the raw tstat images) you should
>>>> feed randomise the COPEs from the lower level. You don't need to do
>>>> the concatenation yourself - you can just use the "filtered_func_data"
>>>> from inside the .gfeat/cope?.feat higher-level FEAT analysis, which is
>>>> the standard-space concatenation of the lower-level COPE outputs.
>>>>
>>>> Cheers.
>>>>
>>>>
>>>> On 26 Jan 2009, at 05:37, Robert Kelly wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> We would like to compare results for a two-sample unpaired t-test
>>>>> performed on the same data in two different ways: 1) using FEAT OLS
>>>>> and 2) using randomise. However, I am puzzled by the observation
>>>>> that the tstat images for the group comparisons, one image generated
>>>>> using FEAT OLS and one using randomise, are not identical. Please
>>>>> explain what causes this discrepancy.
>>>>>
>>>>> Specifically, I performed the following:
>>>>>
>>>>> 1. .feat directories from a first-level analysis on fMRI data were
>>>>> fed into a higher-level FEAT analysis, selecting on the Stats tab
>>>>> “Mixed Effects: Simple OLS” and selecting with the Model Setup
>>>>> Wizard “two groups, unpaired.”
>>>>>
>>>>> 2. Files from the .gfeat directory produced in the last step were
>>>>> fed to randomise in an attempt to perform the identical two-group
>>>>> comparison, using the command
>>>>> “randomise -i TwoSamp4D -o TwoSampT -d design.mat -t design.con
>>>>> -m mask -n 1000 -c 2.8689 -V”
>>>>> where design.mat, design.con, and mask were taken from the .gfeat
>>>>> directory, and TwoSamp4D was created by concatenating the
>>>>> standard-space-transformed zstat images from the lower-level .feat
>>>>> directories (using flirt with reg/standard, stats/zstat1.nii.gz, and
>>>>> reg/example_func2standard.mat).
>>>>>
>>>>> 3. Comparing TwoSampT_tstat1.nii.gz from randomise with
>>>>> cope1.feat/stats/tstat1.nii.gz from the .gfeat directory reveals
>>>>> that the two t-score maps are highly correlated (0.96 with fslcc),
>>>>> yet differ by far more than can be accounted for by rounding error.
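The scale-dependence at issue can be sketched in numpy: an unpaired t-statistic is unchanged by a rescaling shared by all subjects, but a per-subject, per-voxel rescaling (roughly what converting each subject's COPE into a zstat does) changes the map. A toy illustration only, not what randomise computes internally:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "COPEs": 8 subjects per group, 5 voxels
g1 = rng.standard_normal((8, 5)) + 1.0
g2 = rng.standard_normal((8, 5))

def two_sample_t(a, b):
    """Unpaired equal-variance two-sample t-statistic, per voxel."""
    na, nb = len(a), len(b)
    ss = ((a - a.mean(0)) ** 2).sum(0) + ((b - b.mean(0)) ** 2).sum(0)
    sp2 = ss / (na + nb - 2)                      # pooled variance
    return (a.mean(0) - b.mean(0)) / np.sqrt(sp2 * (1 / na + 1 / nb))

t_cope = two_sample_t(g1, g2)

# a scaling shared by every subject cancels out of the t-statistic...
t_common = two_sample_t(2.0 * g1, 2.0 * g2)

# ...but a per-subject, per-voxel rescaling changes the map: highly
# correlated with t_cope, yet not identical
s1 = rng.uniform(0.5, 2.0, size=(8, 5))
s2 = rng.uniform(0.5, 2.0, size=(8, 5))
t_zlike = two_sample_t(g1 * s1, g2 * s2)
```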
>>>>>
>>>>> Being able to compare FEAT OLS and randomise is important to one of
>>>>> our current studies, so it would be a great help to understand why
>>>>> the t-score maps are not identical, and what, if anything, can be
>>>>> done to make them identical, so that we can then compare p-values
>>>>> obtained for clusters, knowing that we are making a valid comparison.
>>>>>
>>>>> Your insights on this topic would be much appreciated.
>>>>>
>>>>> Robert Kelly
>>>>
>>>>
>>>> ---------------------------------------------------------------------------
>>>> Stephen M. Smith, Professor of Biomedical Engineering
>>>> Associate Director, Oxford University FMRIB Centre
>>>>
>>>> FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
>>>> +44 (0) 1865 222726 (fax 222717)
>>>> [log in to unmask] http://www.fmrib.ox.ac.uk/~steve
>>>> ---------------------------------------------------------------------------
>>>