Dear Diana,

Obviously there's no way to reconcile never doing hypothesis tests with 
recommendations on how to do them. In fact, the original 2016 ASA 
statement does *not* say you should never do them, but Wasserstein and 
some others seem keen on getting that message across.

In a fairly recent ASA President's statement, Karen Kafadar distances 
herself from it:

https://errorstatistics.files.wordpress.com/2019/11/kafadar-2019-1.pdf

See also Deborah Mayo's extended discussion of the 2019 paper by 
Wasserstein et al. (the so-called "ASA II" statement), in which they go 
further in this direction:

https://errorstatistics.com/2019/11/04/on-some-self-defeating-aspects-of-the-asas-2019-recommendations-on-statistical-significance-tests/ 


Personally, I'm with those who think that the misuse of p-values and 
tests is not the fault of the tests and p-values themselves, but of 
those who misuse them. Admittedly there are many, many ways and 
opportunities to misuse them; however, in my view this has more to do 
with the fact that statistics is indeed very difficult, with the 
current reward system, and with people's apparent love of simple 
black-and-white messages where more balanced discussions would be 
appropriate. If Bayesian statistics were as popular as p-values, we'd 
see them misused to much the same extent (don't get me started on the 
difficulty and pitfalls of specifying a convincing prior, and how often 
this is not done in the Bayesian literature).
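
As for the practical part of your question: the a priori power analysis 
asked for in point 2 of the advice you quote is easy enough to do 
without any Bayesian software. Purely as an illustrative sketch (the 
effect size, alpha and power below are made-up numbers, and Python with 
scipy/statsmodels is just one convenient choice among many), a sample 
size calculation for a two-sample t-test would look like this:

# Minimal a priori power analysis for a two-sample t-test.
# The numbers (Cohen's d = 0.5, alpha = 0.05, power = 0.80) are
# illustrative assumptions, not values taken from this thread.
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

d = 0.5        # assumed standardised effect size (Cohen's d)
alpha = 0.05   # two-sided significance level
power = 0.80   # desired power, i.e. 1 - beta

# Normal approximation: n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_approx = 2 * ((z_a + z_b) / d) ** 2

# The same calculation based on the noncentral t distribution
n_t = TTestIndPower().solve_power(effect_size=d, alpha=alpha,
                                  power=power, alternative='two-sided')

print(round(n_approx), round(n_t))  # roughly 63 and 64 per group

The hard part, of course, is justifying the assumed effect size; no 
software does that for you, and that difficulty does not go away if you 
set up the design as a Bayesian rather than a frequentist.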

Best wishes,

Christian

On 27/04/2020 16:41, Kornbrot, Diana wrote:
> I am writing an experimentalist's guide on Open Access for manuscripts and data,
> but have found conflicting advice:
> 1. Null-hypothesis testing should not be conducted - ever.
> Wasserstein, R. L., & Lazar, N. A. (2016). The ASA Statement on p-Values: Context, Process, and Purpose. The American Statistician, 70(2), 129-133. doi:10.1080/00031305.2016.1154108
>
> 2. The rationale for the sample size should be given (e.g. an a priori power analysis)
> Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., . . . Wagenmakers, E.-J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4-6. doi:10.1038/s41562-019-0772-6
>
>   A priori power analysis calculates N for a null hypothesis test with specified sensitivity and specificity
>
> How can these recommendations be reconciled?
> What is the best way of choosing a sample size without any reliance on null-hypothesis tests?
> Particularly if the investigator does not have access to Bayesian software, or is a frequentist at heart
>
> Many thanks for any help
> best
> Diana
>
> ____________
> University of Hertfordshire
> College Lane, Hatfield, Hertfordshire AL10 9AB, UK
> +44 (0) 208 444 2081
> +44 (0) 7403 18 16 12
> [log in to unmask]
> http://dianakornbrot.wordpress.com/
> http://go.herts.ac.uk/Diana_Kornbrot/
> skype:  kornbrotme
> Save our in-boxes! http://emailcharter.org
-- 
Christian Hennig
Università di Bologna, Dipartimento di Scienze Statistiche "Paolo Fortunati"
[log in to unmask]
+39 051 2098163

You may leave the list at any time by sending the command

SIGNOFF allstat

to [log in to unmask], leaving the subject line blank.