Hi all,
My understanding of the p<0.05 convention is that 0.05 was more or less
arbitrarily chosen; that is, there wasn't any extensive rationale (by
modern standards, at least) for 0.05.
See for instance http://www.jerrydallal.com/lhsp/p05.htm.
Also, critical p-values differ between disciplines, often for very good
reasons. The existence of the Higgs boson was established at over 5-sigma
confidence, which puts p somewhere south of 0.0000006.
This wouldn't work in, say, pharmacology because the only way to get that
level of certainty (that anyone has thought of so far) would involve
killing people for the sake of the test. Clearly, that's not acceptable.
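To make the sigma/p correspondence concrete, here's a small sketch (my own illustration, not from the discussion above) that converts a sigma level to a p-value under a normal model, using only the standard library:

```python
# Sketch: converting a sigma level to a p-value, assuming a normal model.
# erfc(z / sqrt(2)) gives the two-tailed Gaussian tail mass P(|Z| >= z).
from math import erfc, sqrt

def sigma_to_p(sigma, one_sided=False):
    p_two = erfc(sigma / sqrt(2))  # P(|Z| >= sigma) for Z ~ N(0, 1)
    return p_two / 2 if one_sided else p_two

print(sigma_to_p(5))     # ~5.7e-07 -- "somewhere south of 0.0000006"
print(sigma_to_p(1.96))  # ~0.05 -- the conventional threshold
```

The 1.96 line shows where the familiar 0.05 cutoff sits on the same scale: just under two sigma.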
I've come to think of p-values as useful, but not as a pass/fail standard
(i.e., 0.05 or bust!). Instead, I think they can be useful if they're
calculated from the data and interpreted on a continuum, rather than
compared against an arbitrary pre-set threshold.
Here's an example of why I think that.
I had a PhD student who, as part of his research, hypothesized various
correlations between design methods and outcome indicators. Of all the
experiments he did, only a very few met p<0.05. Under the conventional
approach, he could only have accepted those few results as somehow
"meaningful" and would have had to throw out the rest.
Instead, we *calculated* the p-value corresponding to each data set
and then rank-ordered the results. In this case we were able to say that
there were some results in which we had "high confidence," others that were
so-so, and a few that were very likely just noise. This in turn pointed us
along several very interesting avenues of future work:
* were there any similarities between the results in each of the 3 clusters
(by p-value) that might inform *why* confidence varied so much?
* could those similarities shed light on methodological or conceptual
concerns that need to be addressed?
* which hypotheses would give us the most "bang for the buck" if we got
future funding to continue?
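The calculate-and-rank-order step can be sketched in a few lines. The data and dataset names below are invented for illustration; in practice each (x, y) pair would be a design-method measure and an outcome indicator. For each pair I compute a Pearson correlation and an exact permutation p-value (feasible here because n is tiny), then sort so the highest-confidence results come first:

```python
# Sketch: rank-order hypotheses by a p-value computed from the data.
# Datasets are hypothetical; perm_p is an exact two-sided permutation test.
from itertools import permutations
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def perm_p(x, y):
    """Two-sided exact permutation p-value for the correlation:
    the fraction of reorderings of y whose |r| meets or beats the observed."""
    observed = abs(pearson_r(x, y))
    perms = list(permutations(y))
    extreme = sum(1 for py in perms
                  if abs(pearson_r(x, py)) >= observed - 1e-12)
    return extreme / len(perms)

datasets = {
    "method A vs outcome 1": ([1, 2, 3, 4, 5, 6], [2, 1, 4, 3, 6, 5]),
    "method B vs outcome 2": ([1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1]),
    "method C vs outcome 3": ([1, 2, 3, 4, 5, 6], [3, 1, 6, 2, 5, 4]),
}

# Smallest p first: highest-confidence results at the top of the ranking.
ranked = sorted((perm_p(x, y), name) for name, (x, y) in datasets.items())
for p, name in ranked:
    print(f"p = {p:.4f}  {name}")
```

Clustering the ranked list into "high confidence," "so-so," and "likely noise" is then a judgment call made looking at the whole distribution of p-values, not a single pre-set cutoff.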
As a result of all that work he did, I'm pretty "confident" that p-values
can be a useful tool, though not in the conventional sense.
\V/_ /fas
*Prof. Filippo A. Salustri, Ph.D., P.Eng.*
Email: [log in to unmask]
Web: http://deseng.ryerson.ca/~fil/
ORCID: 0000-0002-3689-5112 <http://orcid.org/0000-0002-3689-5112>
"Time flies like an arrow. Fruit flies like a banana."
On 24 February 2017 at 06:26, Terence Love <[log in to unmask]> wrote:
> One of the biggest problems in design research, particularly in PhDs, is
> the use of the p-value in design research statistics.
>
> This is a much bigger problem than sample size.
>
> How about banning the use of 'p' in research reports in the design research
> literature and PhDs, especially the use of p<0.05?
>
> Best wishes,
>
> Terence
>
> ---
>
> Dr Terence Love
>
> PhD(UWA), BA(Hons) Engin. PGCEd, FDRS, PMACM, MISI
>
> Love Services Pty Ltd
>
> PO Box 226, Quinns Rocks
>
> Western Australia 6030
>
> Tel: +61 (0)4 3497 5848
>
> <mailto:[log in to unmask]> [log in to unmask]
>
> www.loveservices.com.au <http://www.loveservices.com.au>
>
> --
>
>
>
>
>
> -----------------------------------------------------------------
> PhD-Design mailing list <[log in to unmask]>
> Discussion of PhD studies and related research in Design
> Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
> -----------------------------------------------------------------
>