Hi Eric,

Please see below:


On 3 August 2016 at 02:27, Eric Walden <[log in to unmask]> wrote:

> I used randomise with 10000 permutations to do a one-sample t-test.  I was
> surprised to find MORE areas of activation than FLAME1+2.  Oddly, the
> additional areas of activation were basically a halo in the front of the
> brain and a lot of activation in the ventricles.  This is basically what
> one would expect from movement artifacts if R Kelly is correct.
> In my other contrast, I found basically the same activation pattern as
> FLAME1+2 gave me with slightly smaller clusters.
>
> You can see the image here:
> http://killough-walden.com/eric/screenshot.png  Red-Yellow is randomise
> while Blue is FLAME1+2.
>
> My question is:  Are permutation tests somehow particularly susceptible to
> motion artifacts?  Or, more importantly, are permutation tests susceptible
> to motion artifacts under certain conditions?
>
>
No. Nor is FLAME. Nor any other test. These tests use the input data they
are given and will be sensitive to whatever is in that data. If the data are
corrupted or confounded by movement, then a powerful test may well detect it.



>
> The commands I used were:
> randomise -i filtered_func_data.nii.gz -o OneSampTCope110000 -1 -v 5 -T -n 10000
> fslview OneSampTCope110000_tfce_corrp_tstat1.nii.gz
>
> I thresholded at 0.95 in FSLView.
>

This run of randomise isn't directly comparable with FLAME: it uses a
different test statistic (TFCE), which combines both signal strength and
spatial extent, whereas the FLAME analysis tests signal strength only.
Moreover, variance smoothing (-v 5) was used here. For a fairer comparison,
drop -v and -T, then look at the *vox_corrp* files, as in the example below.
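
For instance, keeping your inputs and options but removing -v and -T (the
output prefix here is just an example), that would be:

randomise -i filtered_func_data.nii.gz -o OneSampTCope1_voxelwise -1 -n 10000
fslview OneSampTCope1_voxelwise_vox_corrp_tstat1.nii.gz

Thresholding the vox_corrp image at 0.95, as before, shows voxels significant
at FWE-corrected p < 0.05.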

All the best,

Anderson