Hi Darren,

On 24 Feb 2010, at 20:26, Darren Schreiber wrote:

I've been trying to make sense of the results I get from featquery for a while now and keep getting stuck.  I've read through most of the emails about it on the FSL list, but still haven't got to the point where I'm confident I'm understanding the output correctly.  I've also looked through the FSL manual pages, the course documentation, and the materials that Jeanette Mumford has online.

I have a 2x2 block design in which 19 subjects view images in four categories (Aa, Ab, Ba, Bb).  The order of stimuli is randomized for each subject.  What I want to do is simply generate the typical ROI bar graphs that people use to report the level of activity in a region, with error bars.  At the individual-subject level, I set up a series of contrasts in FEAT:

Aa 1 0 0 0
Ab 0 1 0 0
Ba 0 0 1 0
Bb 0 0 0 1
A 1 1 0 0
B 0 0 1 0

Maybe a typo there for B?  (Presumably 0 0 1 1 was intended.)

a 1 0 1 0
b 0 1 0 1
A>B 1 1 -1 -1
a>b 1 -1 1 -1

I want to see whether there are statistically significant activations or deactivations in the ROIs for the four main conditions.  When I look at the output of featquery, however, I'm confused about which of the PE, COPE, or zstat values I should pay attention to.  My inclination is to report % signal change rather than z-scores, since that seems far more common.  But I'm also unsure how to generate error bars from this.  My rough recollection is that PE1 was the same as the constant in a regression and PE2 was the slope, but I took the FSL course back in 2003 and a lot of it is very rusty now.
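For reference on how these images relate: with the four-EV design above, pe1-pe4 are the fitted regression slopes for EV1-EV4, each COPE is just the contrast vector applied to those PEs, and the t-statistic that FEAT Gaussianises into the zstat image is the COPE divided by the square root of its variance (the varcope).  A minimal sketch with made-up voxel values:

  import numpy as np

  pe = np.array([0.25, 0.18, 0.10, 0.05])  # hypothetical PEs for one voxel (EV1-EV4)
  contrast = np.array([1, 1, -1, -1])      # the A>B contrast from the list above

  cope = contrast @ pe                     # contrast-weighted sum of PEs = 0.28
  varcope = 0.01                           # hypothetical; FEAT writes stats/varcope* images
  t = cope / np.sqrt(varcope)              # t = 2.8; FEAT converts t to the zstat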

Here is the output of featquery for COPE 1 (Aa), using the left amygdala from the Harvard-Oxford Subcortical Structural Atlas as the ROI:

stats image   # voxels  min       10%        mean    median  90%     max     stddev
stats/pe1     630       -0.3008   -0.05585   0.1007  0.1002  0.2667  0.4243  0.1241
stats/pe2     630       -0.1268    0.07538   0.206   0.2025  0.346   0.5481  0.1059
stats/pe3     630       -0.2435   -0.006169  0.1371  0.1391  0.2888  0.4581  0.1087
stats/cope1   630       -0.07132   0.04812   0.1553  0.1464  0.2726  0.4421  0.08771
stats/zstat1  630       -0.5661    0.2942    1.024   1.018   1.662   3.053   0.6049

If the mean I should be using is PE2 (0.206), what would be the most logical way of calculating the standard error of that estimate?  Reporting the 10% and 90% values alone wouldn't reflect the increase in confidence in the average that comes with a larger number of measurements.
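As an aside: if the 630 voxels were independent samples, the standard error of that ROI mean would simply be stddev/sqrt(N).  A two-line sketch using the pe2 row above - though note that neighbouring voxels are spatially correlated after smoothing, so this number would be overconfident:

  import numpy as np

  roi_sd, n_vox = 0.1059, 630          # stddev and # voxels from the pe2 row
  naive_sem = roi_sd / np.sqrt(n_vox)  # ~0.0042; overconfident, since voxels are correlated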

I think this is probably easier than you think - I suspect that you actually want your error bars to reflect variation in PE (or % signal change, if you turn that option on) _across subjects_, rather than variation across voxels within the ROI.  So for each subject you would take the single most relevant measure for the ROI - for example the "mean" or the "max" (I would probably go with the latter) - extract that one number per subject, and then show the cross-subject mean and spread in your figure.
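Something like this, as a rough sketch - it assumes you have already extracted one number per subject per condition (the ROI "mean" or "max" % signal change from each subject's featquery output) into plain text files; all of the file names here are hypothetical:

  import numpy as np
  import matplotlib.pyplot as plt

  conditions = ["Aa", "Ab", "Ba", "Bb"]
  # hypothetical files, each holding 19 values (one per subject)
  values = {c: np.loadtxt(f"{c}_roi_psc.txt") for c in conditions}

  means = [values[c].mean() for c in conditions]
  sems = [values[c].std(ddof=1) / np.sqrt(len(values[c])) for c in conditions]

  plt.bar(range(len(conditions)), means, yerr=sems, capsize=4, tick_label=conditions)
  plt.ylabel("% signal change (per-subject ROI measure)")
  plt.title("Left amygdala")
  plt.savefig("roi_bars.png")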

Hope this makes sense?
Cheers.



Thanks,

Darren


*******************************************************************************
Darren Schreiber, J.D., Ph.D.
Assistant Professor of Political Science, U.C. San Diego
Assistant Adjunct Professor of Law, University of San Diego
Political Science, SSB 367
9500 Gilman Drive
La Jolla, CA  92093-0521
dmschreiber (at) ucsd (dot) edu
*******************************************************************************



---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director,  Oxford University FMRIB Centre

FMRIB, JR Hospital, Headington, Oxford  OX3 9DU, UK
+44 (0) 1865 222726  (fax 222717)
[log in to unmask]    http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------