Hi - it's probably best if Tom comments further on this - however, I think the point in item 1) is that you are passing up an always-positive quantity (like an F value or a sum of squares) and then, in randomise, effectively testing whether this is greater than zero - of course it will be!   So contrasts between F's should be OK, but the mean response will just show everything lighting up.   Hopefully if you passed up the zfstats instead, which *should* be zero-mean in the null case, this wouldn't happen.
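For example (just a sketch - the directory names and wildcards here are illustrative, so adjust them to your own first-level FEAT outputs), the zfstat images could be warped to standard space and merged in the same way as the fstats:

   # warp each subject's zfstat1 into standard space using the existing FEAT registration
   for d in sub*.feat ; do
     flirt -in $d/stats/zfstat1 -ref $FSLDIR/data/standard/MNI152_T1_2mm \
           -applyxfm -init $d/reg/example_func2standard.mat \
           -out $d/stats/zfstat1_standard
   done
   # concatenate across subjects into one 4D file for randomise
   fslmerge -t all_zfstat1_standard sub*.feat/stats/zfstat1_standard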

Wrt your query 2), that may simply mean that with this passed-up quantity you just don't have a significant contrast result…..

Cheers.



On 5 Apr 2012, at 17:19, Charlie Giattino wrote:

Dear FSL Team,

I'm working on analyzing data from an experiment, summarized as follows:
- Auditory pitch discrimination experiment, looking for possible differences between native tone-language speakers (19 subjects) and native non-tone-language speakers (20 subjects).
- Each session had 56 TRs of 9 seconds each. Sparse sampling design: the scanner acquired for the first 5 seconds of each TR and was then inactive for 4 seconds while two tones were played. The subject had to indicate whether or not the two tones differed in pitch; responses were recorded via button press.
- Used a 3-column timing file (a toy example of the format is below), with the event onset set to the onset of the first tone in the pair and the duration covering just the playing of both tones. Adjusted timing according to this FAQ: http://www.fmrib.ox.ac.uk/fslfaq/#feat_sparse.
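Toy example of the 3-column format (numbers made up purely to show the onset / duration / weight layout - the real onsets are shifted as per the sparse-sampling FAQ above):

   5.0    1.5    1
   14.0   1.5    1
   23.0   1.5    1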

I wasn't getting much using the default convolution settings (Gamma), so I tried using basis functions/FLOBS. I got better activation, especially in the Fstat images, but then ran into the problem of how to use those Fstat images for higher-level analysis. I used a past forum post as a guide (copied below) to figure out what to do in randomise, and followed all the steps listed: preprocessing/running through FEAT using FLOBS and 3-column timing for the convolution; preparing the Fstat images for randomise by transforming them into standard space (using example_func2standard.mat) and concatenating them into a 4D image with fslmerge; and finally running randomise with this command: randomise -i <4D_input> -o <output_basename> -d design.mat -t design.con -m <brain_mask> -n 5000 -T (5000 permutations, TFCE).

Here is an explanation of the main "weirdness" I'm getting in my results:
1) The mean activations for each group (tone and non-tone) are off the charts. The whole brain is "lit up" in each result, and the activation shown is far more than any single participant had. The threshold for this activation is also very much higher than expected given the first-level analysis results.
2) The main contrast I'm looking at (Tone - Non-Tone) is statistically very weak. Looking at the 1 - FWE-corrected p image, there is some activation (and in the "right" places, too), but as soon as I raise the threshold to, say, 0.4 (still a very lenient threshold for a 1-p image), everything disappears.

Any thoughts?

Previous forum post I used as a guide:
I have designed an event-related fMRI paradigm that I plan to use in a depressed group (compared to a control group). I have so far collected data for 9 control participants, and am having a good look at it to try to establish a procedure for analysing the entire data-set once collected.

The main contrast of interest in the paradigm is one comparing a rewarding stimulus with a neutral one. I have so far used FEAT for first- and higher-level analysis: cluster-wise analysis at the group level produces some significant areas of activation in anticipated regions. (Voxel-wise analysis produces nothing: I assume this is because the contrasts do not produce strong enough activations at the voxel level - the peak Z is only around 3.9. The stimuli are of a somewhat abstract, social type, and I guess this is to be expected.)

Note that if you have a _PRIOR_ hypothesis about where the activation will be then you can always use pre-threshold masking (on the Post-stats tab of the FEAT GUI) to make it easier to find significance when doing multiple comparison corrections.

I am interested (as I suppose most people are) in processing the data in a way that maximises sensitivity to any activations, and want to try using FLOBS at the first level (in FEAT) and sending the resulting fstat images to Randomise for higher-level analysis. I have read as much as I can around the subject, and want to check with you that the procedure I plan to use sounds sensible. I am relatively new to imaging, so it is possible I have misunderstood some quite fundamental principles.

Instead of passing fstats to randomise, you could also try sending the root mean square of the parameter estimates, e.g. for 3 basis functions:

sqrt(b1^2+b2^2+b3^2)

which you can calculate using fslmaths.
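e.g. something along these lines, where pe1/pe2/pe3 stand for the three basis-function parameter estimate images in each first-level stats directory (names are illustrative):

   # accumulate the squared parameter estimates, then take the square root
   fslmaths pe1 -sqr pe_sumsq
   fslmaths pe2 -sqr -add pe_sumsq pe_sumsq
   fslmaths pe3 -sqr -add pe_sumsq pe_sumsq
   fslmaths pe_sumsq -sqrt pe_rms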

Step 1 is preprocessing the individual 4D images. Do I need to do any of the preprocessing differently than I do for regular FEAT analysis, which includes smoothing and filtering, and unwarping the EPIs with fieldmaps? I know the size of the Gaussian kernel one chooses for smoothing is somewhat arbitrary in usual FEAT analysis: I assume the same considerations apply when higher-level analyses will be undertaken using Randomise?

No reason that I can think of to do any different.

Step 2 is creating the fstat images, using a GLM in which the events are convolved with FLOBS basis functions. Relatively straightforward, I think.

Step 3 is preparing the fstat images for Randomise. I need to transform the individual fstat images for the copes of interest into standard space (using the example_func2standard matrix), and then concatenate them into a 4D image using the fslmerge command. Is this right?

Yes.

Step 4, then, is using Randomise for higher-level analysis. I have a few questions about this:
- even though I am entering fstat images into the analysis, if I am doing a one- or two-sample t-test (to either look at the group means, or compare groups) then to do cluster-based analysis I should use -c or -C (rather than -S or -F)?

Yes, use -c or -C
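For example (filenames are placeholders and the cluster-forming threshold is just a typical choice, not a recommendation):

   # cluster-based thresholding on the t statistic, cluster-forming threshold t > 2.3
   randomise -i all_fstat1_standard -o clust -d design.mat -t design.con -m mask -n 5000 -c 2.3
   # or cluster-mass-based thresholding
   randomise -i all_fstat1_standard -o clustmass -d design.mat -t design.con -m mask -n 5000 -C 2.3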

TFCE is another option for cluster-based analysis. All the discussion on this list concerns using it for TBSS analysis, though I assume I can use it for fMRI analysis?

Yes.

I am keen to try FDR, and understand I should undertake a voxel-wise analysis and feed uncorrected p-values into it. Is there anything else to consider?

Take a look at
http://www.fmrib.ox.ac.uk/fsl/randomise/fdr.html
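A rough sketch of what that could look like (output naming and options differ between FSL versions, so check the randomise and fdr usage messages - in particular the --uncorrp option and the convention that randomise writes 1-p images; filenames here are placeholders):

   # voxel-wise randomise, also requesting uncorrected p-value images
   randomise -i all_fstat1_standard -o vox -d design.mat -t design.con -m mask -n 5000 -x --uncorrp
   # convert a 1-p image to p before feeding it to FDR (input name is a placeholder)
   fslmaths <uncorrected_1-p_image> -mul -1 -add 1 uncorr_pvals
   # FDR at q = 0.05
   fdr -i uncorr_pvals -m mask -q 0.05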



---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
[log in to unmask]    http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------