Related to presentation of TFCE/TBSS results --- well, more to the
statistics --- is there any rule of thumb about the N (assume a simple
two-group design) needed to actually SEE significant results after
correcting for multiple comparisons across the entire brain?
Since my understanding is that the randomise algorithm basically
shuffles the group assignments to estimate the null distribution, in
order to survive correction for thousands or tens of thousands of
multiple comparisons (I'm not 100% familiar with the actual number of
independent degrees of freedom that the TFCE model uses), there needs
to be a relatively large group of subjects to even have a shot.
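As a concrete illustration of that shuffling scheme, here is a toy Python sketch of max-statistic permutation testing (this is not randomise's actual code, and the group sizes, SDs, and effect size are made up for illustration):

```python
# Toy sketch of permutation-based FWE correction: shuffle group labels
# to build a null distribution of the MAXIMUM statistic across voxels,
# then compare each observed statistic against it. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)

# Fake "FA values" for two groups at 50 voxels; group B lowered at voxels 0-4.
n_per_group, n_vox = 20, 50
group_a = rng.normal(0.70, 0.05, size=(n_per_group, n_vox))
group_b = rng.normal(0.70, 0.05, size=(n_per_group, n_vox))
group_b[:, :5] -= 0.07  # a ~10% FA drop in a small cluster

data = np.vstack([group_a, group_b])
labels = np.array([0] * n_per_group + [1] * n_per_group)

def tstat(d, lab):
    """Voxelwise two-sample t statistic (unequal-variance form)."""
    a, b = d[lab == 0], d[lab == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return (a.mean(axis=0) - b.mean(axis=0)) / se

obs = tstat(data, labels)

# Null distribution of the maximum t across voxels, from label shuffles.
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    max_null[i] = tstat(data, rng.permutation(labels)).max()

# FWE-corrected p per voxel: fraction of permutations whose maximum
# statistic meets or exceeds that voxel's observed statistic.
p_fwe = (1 + (max_null[None, :] >= obs[:, None]).sum(axis=1)) / (n_perm + 1)
print("smallest corrected p:", p_fwe.min())
```

The point of taking the maximum over voxels is that a single null distribution controls the family-wise error rate for the whole image, which is essentially what randomise does for voxelwise and TFCE statistics alike.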
Say I was hypothesizing that some white matter voxels connecting to
the hippocampus had FA values roughly 10% lower (say a few voxels with
an FA value of 0.6 in one group versus 0.7 in the other, or
something like that): how big, ballpark, would your groups need to be
before you'd see anything? I realize it depends on the number of
contiguous voxels, etc. --- I am just looking for a rough ballpark.
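No one can give a firm number without knowing the between-subject FA variability, but a back-of-the-envelope power calculation shows where the ballpark lies. Everything here is an assumption for illustration only: an FA standard deviation of 0.05, a Bonferroni-style correction over ~100k voxel-level tests (more conservative than randomise's permutation/TFCE correction), 80% power, and a normal approximation:

```python
# Rough sample-size estimate for a two-group comparison under multiple
# comparisons. ASSUMPTIONS (not from the thread): FA SD = 0.05,
# Bonferroni correction, 80% power, normal approximation.
from math import ceil
from scipy.stats import norm

def n_per_group(diff, sd, n_tests, power=0.80, alpha=0.05):
    d = diff / sd                        # Cohen's d
    z_a = norm.isf(alpha / n_tests / 2)  # two-sided, Bonferroni-corrected
    z_b = norm.isf(1 - power)            # z for the desired power
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# 0.6 vs 0.7 FA (difference 0.1), assumed SD 0.05, ~100k tests:
print(n_per_group(0.1, 0.05, 100_000))
```

With these (generous) assumptions a 0.1 FA difference is a very large effect (d = 2), so even whole-brain correction leaves the required group size in the tens, not hundreds; halve the effect or double the SD and the numbers climb quickly.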
I sometimes feel like I'm "cheating" by looking at the _tfce_p_tstat
images rather than the _tfce_corrp_tstat images (bearing in mind that
people report uncorrected p-values at p < 0.001, p < 0.005, or
whatever, all the time).....
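For concreteness about what the raw TFCE score discussed in the quoted thread below actually is, it can be sketched on a toy 1-D statistic map. This is a hedged illustration of the sum over thresholds h of extent^E * h^H * dh (E = 0.5, H = 2 are FSL's volumetric defaults), not FSL's implementation:

```python
# Toy 1-D sketch of the TFCE score (the "raw" _tfce_tstat value): for
# each point, integrate (cluster extent)^E * (threshold height)^H over
# thresholds h in steps of dh.
import numpy as np

def tfce_1d(stat, E=0.5, H=2.0, dh=0.1):
    scores = np.zeros_like(stat, dtype=float)
    for h in np.arange(dh, stat.max() + dh, dh):
        above = stat >= h
        # Label contiguous runs of supra-threshold points (1-D "clusters"):
        # a new cluster starts wherever above flips from 0 to 1.
        cluster_id = np.cumsum(np.diff(np.r_[0, above.astype(int)]) == 1) * above
        for cid in np.unique(cluster_id[cluster_id > 0]):
            members = cluster_id == cid
            scores[members] += (members.sum() ** E) * (h ** H) * dh
    return scores

# A wide low bump and a narrow tall spike: TFCE rewards both height and
# spatial support, without a hard cluster-forming threshold.
stat = np.array([0, 2, 2, 2, 2, 0, 0, 5, 0, 0], dtype=float)
print(tfce_1d(stat))
```

As the replies below explain, these scores have no known parametric distribution, which is exactly why randomise has to permute to turn them into p-values.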
On Thu, Jul 22, 2010 at 9:30 AM, Reza Salimi <[log in to unmask]> wrote:
> Georgios,
> just to add to Matthew's answer:
> the reason a raw TFCE score is not enough to reject an H0 is that it does not
> have a known/expected (a priori) distribution, such as T or Z, that would let
> you convert a TFCE value into a p-value.
> Therefore, for inference on a TFCE value, you need a
> permutation-generated H0, i.e., a nonparametric inference ...
> In order to get the TFCE image, you can either use the -R option in the
> randomise command OR feed your T-stat image to fslmaths, which can convert it
> to a TFCE image given the E, H and connectivity parameters.
> Cheers
>
> On Thu, Jul 22, 2010 at 2:10 PM, Matthew Webster <[log in to unmask]>
> wrote:
>>
>> Hello,
>> The raw statistic contains the TFCE scores for the
>> unpermuted data (permutation 1). By permuting, randomise is able to
>> generate the null distribution for these TFCE scores and so calculate
>> p-values for the original (raw) statistic image. It is usually far more
>> appropriate to present these p-values than the raw TFCE scores.
>>
>> Many Regards
>>
>> Matthew
>>
>> > Hello TBSS experts,
>> >
>> > I have a question concerning how valid it is to present raw test
>> > statistics.
>> >
>> > After running randomise I get the three following thresholding/output
>> > options:
>> > a) _tfce_corrp_tstat (TFCE, FWE-corrected for multiple comparisons)
>> > b) _tfce_p_tstat (TFCE, uncorrected for multiple comparisons)
>> > c) _tfce_tstat (TFCE, raw test statistic)
>> >
>> > In the literature, it is common for both corrected and uncorrected TBSS
>> > results (voxel-wise and TFCE) to be presented, with the latter backed up by
>> > a follow-up ROI analysis in the regions identified as having statistically
>> > significant differences.
>> >
>> > So my question is, can one present these _tfce_tstat (TFCE - raw test
>> > statistic) results and use a ROI analysis to back them up or is this
>> > inappropriate?
>> >
>> > Additionally, could someone please point me in the right direction as
>> > where to read up on these "raw test statistics" because I can't find any
>> > information in the archives or in the randomise manual.
>> >
>> > Thank you for your insight and help,
>> >
>> > Georgios Alexandrou M.D.
>> > Karolinska Institute
>> > Astrid Lindgren Children's Hospital,
>> > Stockholm, Sweden
>> >
>
>
>
> --
> Reza Salimi-Khorshidi,
> DPhil Candidate, Dept. of Clinical Neurology, University of Oxford (Linacre
> College).
> [log in to unmask]
> FMRIB Centre, John Radcliffe Hospital, Oxford OX3 9DU
> Tel: +44 (0) 1865 222704 Fax: +44 (0)1865 222717
>
--
David A Gutman, M.D. Ph.D.
Center for Comprehensive Informatics
Emory University School of Medicine