As Steve pointed out, to estimate the sample size needed to achieve a
given statistical power, you need some idea of the size of the group
difference you expect to find, as well as of the between-subjects
variability. With TFCE these quantities are not straightforward to
specify, since the TFCE score reflects both magnitude and spatial
extent.

There are programs that will estimate statistical power for more
conventional analyses under certain assumptions. For example, some run
Monte Carlo simulations on synthetic data, assuming a Gaussian
distribution of voxel values with a given spatial smoothness. Such
software lets you estimate power for conventional voxelwise
thresholding, as well as for conventional cluster-based thresholding.
My suggestion would be to use these tools to estimate how many
participants you would need to achieve reasonable statistical power
for these more conventional tests. Since TFCE should yield more
statistical power, you would, if anything, slightly over-estimate the
sample size, which wouldn't do you any harm.
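To make the idea concrete, here is a minimal Monte Carlo sketch in
Python. Everything in it is a placeholder (the grid size, smoothness,
effect size and between-subject SD are invented, and Bonferroni stands
in for a proper FWE correction); it illustrates the simulation logic
only, not any particular package:

    # Toy Monte Carlo power estimate for a two-group voxelwise test.
    import numpy as np
    from scipy import stats
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    shape = (16, 16, 16)             # toy image grid
    effect = np.zeros(shape)
    effect[6:10, 6:10, 6:10] = 0.03  # hypothetical FA difference in an ROI
    sigma = 1.5                      # spatial smoothness of noise (voxels)
    sd = 0.1                         # between-subject SD before smoothing

    def estimated_power(n_per_group, n_sims=100, alpha=0.05):
        n_vox = np.prod(shape)
        hits = 0
        for _ in range(n_sims):
            # Smoothed Gaussian noise per subject (smoothing shrinks the
            # SD, so 'sd' is not the final voxelwise SD; fine for a sketch).
            g1 = np.stack([gaussian_filter(rng.normal(0, sd, shape), sigma)
                           + effect for _ in range(n_per_group)])
            g2 = np.stack([gaussian_filter(rng.normal(0, sd, shape), sigma)
                           for _ in range(n_per_group)])
            p = stats.ttest_ind(g1, g2, axis=0).pvalue
            # Count a 'hit' if any true-effect voxel survives correction.
            if (p[effect > 0] < alpha / n_vox).any():
                hits += 1
        return hits / n_sims

    for n in (20, 40, 80):
        print(f"n={n} per group: power ~ {estimated_power(n):.2f}")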
-Tom
On Fri, Jul 23, 2010 at 8:38 AM, Stephen Smith <[log in to unmask]> wrote:
> Hi - normally when people worry about sample sizes the most important factor
> is the relative sizes of the expected effect (or effect difference) and the
> cross-subject variability. Technical issues such as the p-value
> 'resolution' in permutation testing (limited by the number of subjects,
> as you rightly point out) are secondary. I'm afraid the answer to the first
> question is totally dependent on what factor (disease, plasticity, etc) you
> are investigating!
> Cheers.
>
>
> On 22 Jul 2010, at 16:35, David Gutman wrote:
>
> Related to presentation of TFCE/TBSS results --- well, more related to
> the statistics --- is there any rule of thumb about the N needed
> (assume a simple 2-group design) to actually SEE significant results
> after correcting for the whole brain / multiple comparisons?
>
> Since my understanding is that the randomise algorithm basically
> shuffles the group assignments to figure out the null distribution...
> in order to survive correction for thousands or tens of thousands of
> multiple comparisons (I'm not 100% familiar with the actual number of
> independent DOF that the TFCE model uses)... there needs to be a
> relatively large group of subjects to even have a shot.
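> (To check my understanding, here's a toy sketch of the label-shuffling
> scheme I mean, with invented array shapes; as I understand it,
> randomise applies the same max-statistic idea to TFCE scores rather
> than to plain t-stats:)
>
>     import numpy as np
>     from scipy import stats
>
>     rng = np.random.default_rng(1)
>
>     def perm_fwe_p(data, labels, n_perm=1000):
>         """data: (n_subjects, n_voxels); labels: boolean group flags."""
>         def tmap(lab):
>             return stats.ttest_ind(data[lab], data[~lab], axis=0).statistic
>         t_obs = tmap(labels)
>         # Null distribution of the MAXIMUM t across voxels, one value
>         # per shuffle: comparing against the max gives FWE control, and
>         # the smallest achievable p is 1/(n_perm + 1), set by the number
>         # of shuffles (hence subjects), not by the number of voxels.
>         max_null = np.array([tmap(rng.permutation(labels)).max()
>                              for _ in range(n_perm)])
>         return (1 + (max_null[:, None] >= t_obs).sum(axis=0)) / (n_perm + 1)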
>
> Say I was hypothesizing that some white matter voxels connecting to
> the hippocampus had 10% "less" FA (say in a couple of voxels one group
> had an FA value of 0.6 and the other group 0.7, or something): how big
> (ballpark) would your groups need to be before you'd see anything?
> I realize it depends on the number of contiguous voxels, etc.; I am
> just looking for a rough ballpark.
>
> I sometimes feel like I'm "cheating" by looking at the _tfce_p_tstats
> rather than the _tfce_corrp_tstats (bearing in mind people report
> uncorrected p-values at p<0.001, p<0.005 or whatever all the time)...
>
> On Thu, Jul 22, 2010 at 9:30 AM, Reza Salimi <[log in to unmask]> wrote:
>
> Georgios,
>
> just to add to Matthew's answer:
>
> The reason a raw TFCE score is not enough to reject an H0 is that it
> does not have a known/expected (a priori) distribution, such as T or Z,
> that would let you convert a TFCE value into a p-value. Therefore, for
> inference on a TFCE value, you need a permutation-generated H0, i.e., a
> nonparametric inference.
>
> To obtain the TFCE image, you can either use the -R option in the
> randomise command OR feed your T-stat image to fslmaths, which can
> convert it to a TFCE image given the E, H and connectivity parameters.
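> (The fslmaths call looks something like "fslmaths tstat1 -tfce 2 0.5 6
> tfce_tstat1", if I remember the argument order (H, E, connectivity)
> correctly.) As a toy illustration of why the score has no parametric
> null, here is a rough 1-D TFCE sketch (my own simplification, not
> FSL's implementation; H=2 and E=0.5 are the usual defaults, and the
> dh step is arbitrary):
>
>     import numpy as np
>     from scipy import ndimage
>
>     def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
>         """Rough 1-D TFCE: sum extent**E * height**H over thresholds."""
>         out = np.zeros_like(stat, dtype=float)
>         for h in np.arange(dh, stat.max() + dh, dh):
>             clusters, n = ndimage.label(stat >= h)  # supra-threshold runs
>             for c in range(1, n + 1):
>                 in_c = clusters == c
>                 out[in_c] += (in_c.sum() ** E) * (h ** H) * dh
>         return out
>
> Because the score mixes cluster extent and height like this, its null
> distribution depends on the data's smoothness and has to be built
> empirically, by recomputing it on each permuted stat map.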
>
> Cheers
>
> On Thu, Jul 22, 2010 at 2:10 PM, Matthew Webster <[log in to unmask]> wrote:
>
> Hello,
>
> The raw statistic image contains the TFCE scores for the unpermuted
> data (permutation 1). By permuting, randomise is able to generate the
> null distribution for these TFCE scores and so calculate p-values for
> the original (raw) statistic image. It is almost always more
> appropriate to present these p-values than the raw TFCE scores.
>
> Many Regards
> Matthew
>
> Hello TBSS experts,
>
> I have a question concerning how valid it is to present raw test
> statistics.
>
> After running randomise I get the following three thresholding/output
> options:
>
> a) _tfce_corrp_tstat (TFCE, FWE-corrected for multiple comparisons)
> b) _tfce_p_tstat (TFCE, uncorrected for multiple comparisons)
> c) _tfce_tstat (TFCE, raw test statistic)
>
> In the literature, it is common for both corrected and uncorrected TBSS
> results (voxel-wise and TFCE) to be presented, with the latter validated
> by an ROI analysis applied in the regions identified as having
> statistically significant differences.
>
> So my question is: can one present these _tfce_tstat (TFCE raw test
> statistic) results and use an ROI analysis to back them up, or is this
> inappropriate?
>
> Additionally, could someone please point me in the right direction as
> to where to read up on these "raw test statistics"? I can't find any
> information in the archives or in the randomise manual.
>
> Thank you for your insight and help,
>
> Georgios Alexandrou M.D.
> Karolinska Institute
> Astrid Lindgren Children's Hospital
> Stockholm, Sweden
>
> --
> Reza Salimi-Khorshidi,
> DPhil Candidate, Dept. of Clinical Neurology, University of Oxford
> (Linacre College)
> [log in to unmask]
> FMRIB Centre, John Radcliffe Hospital, Oxford OX3 9DU
> Tel: +44 (0) 1865 222704  Fax: +44 (0) 1865 222717
>
> --
> David A Gutman, M.D. Ph.D.
> Center for Comprehensive Informatics
> Emory University School of Medicine
>
> ---------------------------------------------------------------------------
> Stephen M. Smith, Professor of Biomedical Engineering
> Associate Director, Oxford University FMRIB Centre
>
> FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
> +44 (0) 1865 222726 (fax 222717)
> [log in to unmask] http://www.fmrib.ox.ac.uk/~steve
> ---------------------------------------------------------------------------