Subject: Re: F-contrasts vs. "planned" t-tests / Valid approach for ANOVAs
From: Roberto Viviani <[log in to unmask]>
Reply-To: [log in to unmask]
Date: Wed, 14 Nov 2012 11:34:32 +0100
Content-Type: text/plain
> Okay, the "neuropsychological test battery" was a really bad
> example. If your battery is large enough, and you correct for
> multiple comparisons, it is very unlikely to find any true effects.
> So I would not have corrected on F-test level (would have reported
> both corrected and uncorrected p-values).
>
> Anyway, I had thought that I would have to take into account the
> number of post-hoc tests (as well), but this seems to be wrong then.
It isn't wrong; the problem is just not specific to neuroimaging.
If you correct for the search volume in each t-test, you have the
same problem you would have with the neuropsychological test battery.
Here the F-test does not solve the problem, because it only covers
comparisons that were planned in advance; for post-hoc tests you
would instead need a Bonferroni correction.
Opinions on what to do in this situation vary. In genetic
epidemiology, the tendency is to require large samples and
(Bonferroni-corrected) significance thresholds much lower than 0.05
before a result is considered credible. In the political sciences,
the view has been expressed that corrections are harmful because of
their effect on type II error. Others favour FDR approaches, which
become increasingly attractive as the number of tests grows.
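For concreteness, here is a minimal sketch of the two correction styles mentioned above, in plain Python with made-up p-values (neither the values nor the function names come from SPM; this is illustration only):

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H0_i when p_i <= alpha / m (controls family-wise error)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest p-values, where
    k is the largest rank with p_(k) <= (k/m) * alpha (controls FDR)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.012, 0.030, 0.200]
print(bonferroni(pvals))          # [True, True, False, False, False]
print(benjamini_hochberg(pvals))  # [True, True, True, True, False]
```

On the same p-values, Bonferroni keeps only the two strongest effects, while FDR control keeps four; this is why FDR becomes attractive as the number of tests grows.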
>
>
> Back to the fMRI data. Imagine a purely within-subject 3x3 ANOVA,
> which should be reasonable nowadays: e.g. "face" (happy, sad,
> fearful) and "sex" (male, female, morph). Maybe I have specific
> hypotheses, but maybe I do not (at least for some levels, e.g.
> concerning "morph"). In the latter case, I would run F-tests for
> "face", "sex" and the interaction. Imagine I get some clusters
> surpassing an otherwise defined cluster-extent threshold. What
> should I do then?
>
>
> Or should I run lots of t-tests right from the beginning? I would
> already have to conduct 12 one-sided tests for "face" and "sex". And
> to ensure that the results make sense, I would have to check all
> the interactions as well.
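As a side note, the count of 12 one-sided main-effect tests can be reproduced by enumerating the pairwise level comparisons of the two factors; a minimal sketch (level names follow the example above, the code itself is purely illustrative):

```python
from itertools import combinations

factors = {"face": ["happy", "sad", "fearful"],
           "sex": ["male", "female", "morph"]}

# Each unordered pair of levels yields two one-sided tests (A > B, B > A):
# 3 pairs per factor x 2 directions x 2 factors = 12 tests.
tests = [(factor, a, b)
         for factor, levels in factors.items()
         for x, y in combinations(levels, 2)
         for a, b in [(x, y), (y, x)]]
print(len(tests))  # 12
```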
You could declare your t-tests exploratory.
You won't escape the problem by adopting one or the other approach to
correction. To see intuitively why the problem is inevitable, think
of it as a requirement on the resolution of your data: if you want
high resolution (to figure out which of these many conditions is
responsible for the variance), you need more data; otherwise, you
will be looking at noise.
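One way to make the resolution argument concrete: under a Bonferroni correction, the critical value (and with it the effect size detectable at a fixed sample size) tightens quickly as the family of tests grows. A rough sketch using the standard-normal quantile (a simplification; real fMRI thresholds also involve the search volume):

```python
from statistics import NormalDist

def bonferroni_z(m, alpha=0.05):
    """Critical z for a two-sided test at the Bonferroni-adjusted
    level alpha / m."""
    return NormalDist().inv_cdf(1 - alpha / (2 * m))

# The threshold rises from ~1.96 for a single test as m grows,
# so detecting the same effect needs correspondingly more data.
for m in (1, 12, 100):
    print(m, round(bonferroni_z(m), 2))
```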
Best wishes,
Roberto Viviani
Dept. of Psychiatry III
University of Ulm, Germany