> In an ANOVA, does anyone know when one should correct for the number
> of post-hoc tests you are doing following the F-test?
The most common approach in ANOVA is to test the set of contrasts with
an F test, and then proceed to examine individual contrasts without
correction. This approach is said to go back to Fisher and relies on
the protection intrinsic in the F test. It applies when one has 'a
priori' contrasts.
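A minimal sketch of this "protected" procedure, using hypothetical data
(scipy's f_oneway and ttest_ind stand in for the ANOVA machinery):

```python
from scipy import stats

# Hypothetical measurements for three groups (illustrative values only).
groups = [
    [1.1, 2.0, 1.5, 1.8, 1.2],
    [1.0, 1.6, 1.4, 1.9, 1.3],
    [2.8, 3.1, 2.6, 3.3, 2.9],
]

alpha = 0.05
f_stat, f_p = stats.f_oneway(*groups)

if f_p < alpha:
    # Omnibus F significant: examine pairwise contrasts WITHOUT correction,
    # relying on the protection of the F test (Fisher's protected LSD).
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            t, p = stats.ttest_ind(groups[i], groups[j])
            print(f"group {i} vs {j}: t = {t:.2f}, uncorrected p = {p:.4f}")
else:
    print("omnibus F not significant; contrasts are not examined")
```

With these illustrative values the omnibus test passes and the third group
separates from the other two, while the first two do not differ.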
If you look in ANOVA textbooks, you'll find that several approaches
have been developed for post-hoc tests, with Scheffé's procedure
corresponding to the strong-correction type, as it would be formulated
in neuroimaging terminology. Complications ensue from step-down
strategies, etc.
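For concreteness, a sketch of Scheffé's criterion (group count, sample
sizes, and the contrast t-value below are all hypothetical): a contrast is
declared significant only if its |t| exceeds sqrt((k-1) * F_crit), which
controls the familywise error over all possible contrasts simultaneously.

```python
import numpy as np
from scipy import stats

k, n_per_group = 3, 20          # hypothetical: 3 groups, 20 subjects each
N = k * n_per_group
alpha = 0.05

# Critical F for the omnibus test, then the Scheffé threshold on |t|.
f_crit = stats.f.ppf(1 - alpha, dfn=k - 1, dfd=N - k)
scheffe_crit = np.sqrt((k - 1) * f_crit)

t_contrast = 2.4                # hypothetical contrast t-value
print(f"Scheffé |t| threshold: {scheffe_crit:.3f}")
print("significant" if abs(t_contrast) > scheffe_crit else "not significant")
```

The Scheffé threshold is necessarily stricter than the uncorrected
two-sided t critical value, which is where the power cost comes from.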
I do not see an easy way of adopting the first approach in
neuroimaging, because F tests are not constrained to the same set of
contrasts across the volume. They correct for any possible combination
of contrasts, but this combination is free to vary from voxel to
voxel. To me that seems a huge space of combinations to correct for,
resulting in unrealistic requirements on the effect of interest.
Scheffé's post-hoc is a strong correction, which you can nest within a
strong peak-level correction if you want, but the same objections
about power may be repeated here.
I believe an argument may be made, in the a priori contrasts case, for
not worrying about the multiple comparison problem within the ANOVA,
on the following ground (besides the unrealistic-requirements argument
mentioned above). The multiple comparison problems in ANOVA and in
neuroimaging are of essentially different natures. In neuroimaging,
the multiplicity arises in the dependent variables; in ANOVA, it
arises in the independent variables. If you do not correct and the
null is true, you will be wrong in up to nearly 100% of cases in the
former type of multiplicity, but in at most 5% of cases in the latter
('cases' refers here to individual tests). As you can see, the type I
error is bounded by the testing procedure in one case but not in the
other.
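The asymmetry can be illustrated numerically (assuming, for simplicity,
independent tests and the conventional alpha = 0.05; the figure of 100
voxel-wise tests is an arbitrary illustration):

```python
alpha, m = 0.05, 100   # m uncorrected tests on the dependent-variable side

# Multiplicity in the dependent variables (e.g., m voxels): under the
# global null, the chance of at least one false positive grows toward 1.
p_any_false_positive = 1 - (1 - alpha) ** m
print(f"P(at least one false positive over {m} voxels) = {p_any_false_positive:.3f}")

# Multiplicity in the independent variables with a protecting omnibus F:
# under the null, the post-hoc stage is only reached with probability
# alpha, so the familywise error cannot exceed alpha.
print(f"P(reaching any post-hoc test under the null) = {alpha:.2f}")
```

Even at 100 tests, the uncorrected dependent-variable multiplicity
virtually guarantees a false positive, while the protected ANOVA route
stays bounded at 5%.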
I also agree with the view expressed by Stephen Fromm that most papers
are published even without correcting for the neuroimaging
multiplicity (that does not apply to the papers I submit, however).
Best wishes,
Roberto Viviani
Department of Psychiatry and Psychotherapy III
University of Ulm, Germany
>
> It seems that doing it different ways produces different results:
> (1) Modify the T-statistic so that it produces the corrected p-value
> (Tukey/Bonferroni)?
> (2) Divide your voxel-wise p-value by the number of tests or another
> correction factor (Tukey/Bonferroni)?
> (3) Use uncorrected voxel p-values, but correct the cluster p-values
> for the number of tests or another correction factor (Tukey/Bonferroni)?
> (4) Ignore the number of tests that you are doing post-hoc (LSD approach)?
>
> Any thoughts would be appreciated.
>
> Best Regards, Donald McLaren
> =================
> D.G. McLaren, Ph.D.
> Postdoctoral Research Fellow, GRECC, Bedford VA
> Research Fellow, Department of Neurology, Massachusetts General Hospital and
> Harvard Medical School
> Office: (773) 406-2464
> =====================