A few months ago I posed the questions below. Many
thanks to those of you who replied. I attach a text file
listing the responses. Sorry for the delay in forwarding
the attached, in particular to those who requested a copy.
I have made no attempt to post my opinion on the matter as
I think it will only cause a deluge of yet more emails. If
you want my opinion then email me individually. Overall,
opinions were very varied. Read the listing with an open
mind.
With best wishes
Philip Sedgwick
**************************************************************************
I was wondering if anyone could provide me with an
argument or reference (preferably both) as to why p-values
can never be exactly equal to zero.
Does anyone have any thoughts or comments on the following:
Some colleagues object to me presenting a p-value as
p=0.000 (even if I indicate that I am presenting to only
three decimal places) and prefer me to use p<0.0001. I do
not like the latter as I think it ultimately encourages or
endorses the use of stars or NS (Not significant) and S
(Significant) to represent p-values. Furthermore, if one
takes that view that one can never present p=0.000 to
represent a p-value to three decimal places, surely I can
never present any so-called 'exact p-values'?
Somewhere in the depths of my memory I recall that someone
was trying to arrange a one-day conference on p-values. Has
there been any progress?
**********************************************************************
-----------------------------------
Dr. Philip Sedgwick
Lecturer in Medical Statistics
Department of Public Health Sciences
St. George's Hospital Medical School
London SW17 0RE
Email: [log in to unmask]
Telephone: +44 20 8725 5551
Fax: +44 20 8725 3584
1. Isaac Dialsingh
An easy way to 'know' that the p-value will never be zero is to look at both the t and normal distributions: they don't touch the x-axis, although in some books they appear to. The x-axis is really an asymptote. Recall that for the normal distribution, the range of x values is between negative infinity and positive infinity.
But because most programs only cater for 3 or 4 decimal places, they will normally show 0.000 or 0.0000 if the p-value is smaller.
Also, some books differentiate between these values. I also came across a text (I can't remember the name) that said something about significance levels:
0.05 significant
0.01 very significant
0.1 not very significant
2. Peter Lane
Research Statistics Unit, SmithKline Beecham
I personally advocate the representation of p-values as follows:-
(1) If value >= 0.001, give probability to three decimal places because there is no need for more accuracy in most applications, whereas two decimal places seems to me to be insufficient.
(2) If value < 0.001, state p<0.001 because I don't see any point in establishing smaller probabilities in most applications; however, this does not apply to the representation of probabilities of rare events, rather than of significance probabilities.
I think that neither of the following should be tolerated:-
(3) Do not use: p < 0.05 because this reduces information (compared to, say, p=0.045); probabilities can be calculated exactly now so do not need to be looked up in tables; this form also forces pre-set significance levels on the consumers of statistics.
(4) Do not use: NS because this reduces information, and forces a pre-set significance level.
I don't think that the following form should be used either:-
(5) Do not use: p=0.000 because this confuses consumers of statistics who will think that it means something is impossible, even if you state what rules you have used in terms of rounding.
3.
Jay Warner
Principal Scientist
Warner Consulting, Inc.
The p-value is the area under the (usually normal) density function/curve from the given value to -inf or +inf, whichever tail we are on. As you know, the density function never exactly reaches 0. Therefore, the area under the curve can never reach 0.
As a practical issue, the tails of the curve are usually the first to deviate from normal, and sometimes that deviation will be to go to 0, although we would be hard pressed to show data to that effect, as the area in that tail region is usually minuscule anyway. Excel will calculate it with NORMSDIST, NORMDIST, NORMINV, and some others. Try it on the left (negative) side.
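[Editor's note: the tail calculation Jay Warner describes can also be sketched with Python's standard library. This is an illustrative addition, not part of the original reply; it uses erfc rather than 1 - cdf, since the latter suffers cancellation and rounds tiny tail areas to exactly zero.]

```python
from math import erfc, sqrt

def norm_tail(z):
    """One-sided upper-tail p-value P(Z >= z) for a standard normal.

    Uses erfc to keep accuracy far out in the tail, where naively
    computing 1 - cdf(z) would round to exactly zero.
    """
    return 0.5 * erfc(z / sqrt(2.0))

print(norm_tail(10.0))   # ~7.6e-24: tiny, but strictly positive
print(norm_tail(40.0))   # 0.0: the true value underflows double precision
```

Note the second call: even though the mathematical tail area is positive for every finite z, floating point eventually reports 0.0, which is one practical source of "p = 0.000" in software output.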
> Does anyone have any thoughts or comments on the following:
> Some colleagues object to me presenting a p-value as
> p=0.000 (even if I indicate that I am presenting to only
> three decimal places) and prefer me to use p<0.0001. I do
> not like the latter as I think it ultimately encourages or
> endorses the use of stars or NS (Not significant) and S
> (Significant) to represent p-values.
Only if you let it. A very tempting thing, to say the dichotomy, yes it
is; no it is not. But still, as you suggest, incomplete.
> Furthermore, if one
> takes that view that one can never present p=0.000 to
> represent a p-value to three decimal places, surely I can
> never present any so-called 'exact p-values'?
p < 0.001 is exact?!
4. Ronan M Conroy
Lecturer in Biostatistics
Royal College of Surgeons
>I was wondering if anyone could provide me with an
>argument or reference (preferably both) as to why p-values
>can never be exactly equal to zero.
This just isn't true. If you look at a condition like ventricular
fibrillation, which was 100% fatal before the advent of DC shock, and you
were to examine the results of the first resuscitation (patient
survives), the chances of observing that result under the null hypothesis
(fatality rate is 100%) are zero.
In PRACTICE we write p < 0.001 when the first 3 decimal places are zero.
5. Dr Brian G Miller
Director of Research Operations
Institute of Occupational Medicine
To me the interesting and fundamental question here is this: if your
(presumably non-statistical) colleagues are coming to you for expert input,
as they should, why won't they accept it when you give it? This surely says
something about the professional status of applied statisticians in the eyes
of medics or other scientists: not a new topic, but a seemingly endless
source of frustration!
Be brave! Best wishes
6. Peter Das
Netherlands
Natural scientists have the useful rounding convention of 'significant digits'. This means that a number reported as, say, 4.53 is taken to mean 'at least 4.525 and less than 4.535'. The rounding takes place with the accuracy of the measurement or calculation in view.
Accepting this convention, a result reported as 4.53 is different from a result reported as 4.530, because the latter implies a value in the interval [4.5295 .. 4.5305), where the square and round brackets [ and ) mean including and excluding the endpoint, respectively. Following these conventions, the result 0.000 must mean a value in the interval [-0.0005 .. 0.0005), and since negative probabilities are impossible, [0 .. 0.0005).
Your presentation follows this convention. Nothing wrong with that, except that your public may need education in this convention. You will not find a reference why p-values cannot be exactly zero. That is because events can be possible while having probability exactly zero.
Such an event is the drawing 'at random' of a rational number from the collection of all real numbers between 0 and 1.
7.
Paul T Seed
Departments of Obstetrics & Gynaecology & Public Health Sciences,
Guy's Kings and St. Thomas' School of Medicine,
King's College London,
>I was wondering if anyone could provide me with an
>argument or reference (preferably both) as to why p-values
>can never be exactly equal to zero.
No, but I can give you a counter-example:
H0: "There are no black swans", i.e. p_s = 0
Data: 100 swans, 1 being black
p = 1 - (100C0 * 0^0 * 1^100) = 1 - 1 = 0 (exact binomial distribution, taking 0^0 = 1)
This is based on a famous philosophical problem from (I think) the Middle Ages, before Captain Cook reached Australia and black swans became known in Europe.
Usually under H0, no single observed value has probability 0; so no combination of observed values has probability 0, so p > 0 from which the general finding follows.
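[Editor's note: Paul Seed's exact binomial calculation can be sketched numerically; this illustration, using only Python's standard library, is an editorial addition. The function name and one-sided tail convention are choices made here.]

```python
from math import comb

def binom_upper_p(k, n, p0):
    """Exact one-sided p-value: P(X >= k) for X ~ Binomial(n, p0).

    Under the degenerate null p0 = 0, observing even one 'success'
    (one black swan) gives a p-value of exactly zero; under any
    non-degenerate null 0 < p0 < 1, every outcome has positive
    probability, so the p-value is strictly positive.
    """
    return sum(comb(n, j) * p0**j * (1 - p0)**(n - j)
               for j in range(k, n + 1))

print(binom_upper_p(1, 100, 0.0))    # 0.0: one black swan under 'no black swans'
print(binom_upper_p(1, 100, 0.01))   # ~0.634: positive under a non-degenerate null
```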
>Does anyone have any thoughts or comments on the following:
>Some colleagues object to me presenting a p-value as
>p=0.000 (even if I indicate that I am presenting to only
>three decimal places) and prefer me to use p<0.0001. I do
>not like the latter as I think it ultimately encourages or
>endorses the use of stars or NS (Not significant) and S
>(Significant) to represent p-values. Furthermore, if one
>takes that view that one can never present p=0.000 to
>represent a p-value to three decimal places, surely I can
>never present any so-called 'exact p-values'?
I think you are right. However, most journal editors and most co-authors are unhappy with p=0.000 and I often have to replace it with p<0.001. I try to give CIs, and omit p-values altogether.
Your point about starred values is well taken, and I may fight it harder in future.
8.
David Matthews
A p value is the area under the tail of a curve between a point on the horizontal axis and infinity. For p to go to zero, the point would have to go to infinity. That situation is of interest only to theoreticians.
9. Pat Gaffney
p-value = Prob of seeing something as, or more, unusual as what you saw.
In a discrete distribution, this is always > 0 (since the prob of seeing "what you saw"
must be non-zero).
In a continuous distribution, it's possible [I guess] to have a zero p-value if the distribution is truncated (cut off at some point) and you observed the cut-off value. But for most distributions we deal with, it will be non-zero.
In your example of p=0.000, you should say p < 0.001 (not 0.0001). You probably could say p < 0.0005 (since if it were 0.0006, you'd probably see 0.001 on the output from your statistical package, after rounding). I see no problem with reporting it as p < 0.001 [very, very strong evidence]. Perhaps the problem is with the package, since you clearly want it to report p=0.0000000 if the p-value is almost zero.
10. Louis G. Tassinary,
Texas A&M University
No reference to offer but I will share an argument that I've heard and been persuaded by. The use of just zeros implies that there is no uncertainty. The use of < makes it explicit that there is some uncertainty, albeit small, without having to list varying numbers of decimal places. If the scientific community generally were more accustomed to using scientific notation for p-values this wouldn't be an issue (e.g., p = 1.36 x 10^-6). In addition, there is nothing wrong with binary thinking (i.e., NS & S) when your primary purpose in using statistics is to assist you in making a decision, not estimating an effect.
11.
N.C.Allum
London School of Economics
A p-value of 0 would mean that, assuming the null hypothesis is correct, it is *impossible* that we would observe the data. No statistical test would want to be interpreted thus. Why not just report the standard errors and confidence intervals and let the reader decide - then you don't have the problem?
12. Teddy Seidenfeld
You don't give the details but surely, as an example of a zero p-value, consider when the outcome is outside the sample-space under the null hypothesis but consistent with the alternative. Then the P-value is 0, trivially!
For example, with X distributed Uniform [0, theta), for a null hypothesis H: theta = k > 0, and alternative hypothesis H': theta > k, the observation, e.g., X = k+1 is maximally "discrepant" with H, etc., and is in the alpha = 0 level (UMP) rejection region for this null hypothesis against each alternative that is consistent with such an observation, e.g., theta > k+1. Given this observation, the likelihood for the null is 0, and the observed P-value (for any reasonable test) is 0 as well, I contest.
Does this situation apply in your case?
13. Zoann J Nugent
Dundee Dental Hospital & School
> I was wondering if anyone could provide me with an
> argument or reference (preferably both) as to why p-values
> can never be exactly equal to zero.
The simplest explanation is that, if your data exist, they cannot have a p-value of zero.
The only time you get p=0 is in philosophy:
Hypothesis: all swans are white.
Fact: Black swan
> Does anyone have any thoughts or comments on the following:
> Some colleagues object to me presenting a p-value as
> p=0.000 (even if I indicate that I am presenting to only
> three decimal places) and prefer me to use p<0.0001. I do
> not like the latter as I think it ultimately encourages or
> endorses the use of stars or NS (Not significant) and S
> (Significant) to represent p-values. Furthermore, if one
> takes that view that one can never present p=0.000 to
> represent a p-value to three decimal places, surely I can
> never present any so-called 'exact p-values'?
Yes, you can have exact p-values. BUT, if p=1 or 0, you are no longer dealing in statistics. SPSS is especially guilty of p=0.000. If you have the data to do the test on, the outcome cannot have a p of 0, only a very small p.
14.
Phil Woodward
Pfizer Central Research
The meeting on p-values is now tentatively scheduled for October 18 at Errol St. More details will be posted on Allstat once finalised.
It seems to me that your colleagues are being rather pedantic, since I would hope that most numerate people would interpret p=0.000 (3dp) as p < 0.0005 anyway.
15. John Whittington,
Mediscience Services, Twyford
It seems to me that your ALLSTAT question is, in fact, two or three - one about probability theory, one about notation and (maybe) one about 'exact' p-values. This is how I see it ....
>I was wondering if anyone could provide me with an
>argument or reference (preferably both) as to why p-values
>can never be exactly equal to zero.
In theory, probability levels obviously can be zero in some cases. p(A and B) is obviously going to be zero if A and B are mutually exclusive (e.g. the probability of an individual being both alive and dead).
A little closer to the practical world, if one had a null hypothesis of the form H0:A is always true, then it only requires the finding of one case in which A is not true to enable the hypothesis to be rejected with p=0.
In the everyday real world of hypothesis testing, of course, all the commonly used frequency distributions have 'infinite tails' - so that, in a literal sense, 'p' can never be exactly zero.
>Does anyone have any thoughts or comments on the following:
>Some colleagues object to me presenting a p-value as
>p=0.000 (even if I indicate that I am presenting to only
>three decimal places) and prefer me to use p<0.0001. I do
>not like the latter as I think it ultimately encourages or
>endorses the use of stars or NS (Not significant) and S
>(Significant) to represent p-values.
This seems nothing more than a question of conventions regarding notation. When you write p=0.000, I presume you mean 'zero to 3 decimal places', in other words p<0.0005, so I don't see that it would make any difference whether you wrote that as 0.000 or as <0.0005. Statistical software often uses the '0.000' approach, but only really for convenience, and one doesn't usually see it in published papers. In any event, 0.000 would be just as tempting as p<0.0005 for someone obsessed with 'stars' and S/NS!! For publication purposes, most journals will have their 'house style' for this sort of thing, anyway!
>Furthermore, if one
>takes that view that one can never present p=0.000 to
>represent a p-value to three decimal places, surely I can
>never present any so-called 'exact p-values'?
If (as I suspect), when you say 'exact p-values', you are simply talking about giving an 'actual' p value rather than categorising it ('NS' <0.05, <0.0001 etc.), then this is really just the notation issue I've discussed above.
However, if you are talking about truly 'exact' p-values, derived from permutation calculations for a discrete distribution (e.g. 'Fisher's Exact Test') then, of course, there is no guarantee that you would be able to express the p-value PRECISELY with a finite number of decimal places (p = 1/3 is probably the simplest example!). I have a feeling that, because of the nature of permutation calculations, such a p-value will always be a rational number (and hence cannot be zero) so could always be represented precisely as a/b, but not necessarily as a decimal number.
... that's my two pence worth, anyway!!
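[Editor's note: Whittington's observation that permutation p-values are rational can be illustrated with exact arithmetic. This sketch is an editorial addition; the function name and the one-sided upper-tail convention are choices made here, not part of the reply.]

```python
from fractions import Fraction
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]],
    returned as an exact rational number (hypergeometric upper tail on a).

    Being a ratio of counts of arrangements, the result is always a
    rational number: strictly positive for an observable table, but
    possibly with no finite decimal expansion.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    p = Fraction(0)
    for x in range(a, min(row1, col1) + 1):
        p += Fraction(comb(col1, x) * comb(n - col1, row1 - x), comb(n, row1))
    return p

print(fisher_exact_one_sided(3, 0, 0, 3))   # 1/20: terminates as 0.05
print(fisher_exact_one_sided(1, 0, 0, 2))   # 1/3: no finite decimal expansion
```

The second table gives exactly the p = 1/3 case mentioned above: precisely representable as a/b, but only approximately as a decimal.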
16.
Martin Bland
Dept of Public Health Sciences
St. George's Hospital Medical School
P values can be exactly zero. It depends on the null hypothesis you are trying to test.
For example, if I were to test the null hypothesis that no people are green, and I find a green person, the probability of this under the null hypothesis is exactly zero. (I have actually seen a green person, while crossing the storm-tossed Channel on a hovercraft!)
On the other hand, if I were to test the null hypothesis that two infinite populations had the same proportion with something, no matter how different the sample estimates and how large the samples, it would always be possible to get the sample difference if the null
hypothesis were true. It would be very unlikely, but possible.
Most applications are of the second type.
Further, if we use a large-sample approximation, such as chi-squared or Normal, these distributions have infinite range and so the probability of exceeding any specified
value must exceed zero. When using these approximations, the fit of the distribution usually gets worse and worse as we move further along the tail. When P is very small, we
don't really know what it is, only that it is small.
I think that it is important to remember that the consumers of statistics are often people to whom numbers do not speak, e.g. doctors. They do not understand the distinction between 0, 0.000, and 0.0000.
17. David Braunholtz
Department of Public Health & Epidemiology
University of Birmingham
p=0 (p-values arise, remember, from a hypothesis test) means that the data is structurally impossible if the hypothesis/ model are true. I can't imagine formal statistical tests would be necessary to discover such circumstances in practice.
I would have it that hypothesis testing is only conceivably useful if rejecting/not the hypothesis is directly associated with some DECISION choice. Utilities for outcomes ought to come into it somewhere as well - so surely a critical value for p should be chosen which is optimal for the particular decision. Of course the null hypothesis is rarely the relevant one for a decision ...
Much better to simply report the data sufficiently for people to do their own 'decision analyses' ? (eg estimates, SDs, SEs, Ns)
18.
Roger Newson
Department of Public Health Sciences
Guy's, King's and St Thomas' School of Medicine
In principle, a P-value can be exactly zero. The formal definition of a P-value is the probability of observing a result at least as improbable as the result observed, assuming that the hypothesis is true. (We both know that this is the most difficult statistical concept to get across to medical students.) If the null hypothesis predicts a zero probability of the result observed, then the P-value is indeed zero. For instance, if I have a hypothesis saying that the sun always rises on a day (defined as a 24-hour period), and one day the sun doesn't rise (because I am visiting the Arctic in mid-winter), then the P-value for this daily datum, under my hypothesis, is exactly zero.
However, most useful statistical hypotheses allow almost any result to be at least possible. A normal distribution of height, for instance, allows people to be 10 or -10 miles tall with a small but nonzero probability. In this case, if we observe Alice in Wonderland when she is over a mile high, and have a hypothesis that she was sampled from the human population of England in Francis Galton's time, then the P-value is tiny, but not zero.
Another thought on P-values. The ideal way to present them in a paper is to 4 significant figures, rather than 4 decimal places. So a P-value of 0.0000001576 would be presented as such, or (in scientific notation) as 1.576*10^-7. This is the way to go if you really need to emphasize just how tiny a P-value is. For instance, you might be doing a fishing expedition for genes predisposing to autism (in a case-control study), and one allele (out
of 25 in the fishing expedition) might have such a P-value. It then matters that you would have had to scan over a million genes, rather than 25, to get such a result by chance.
You can probably get away with presenting things in scientific notation, or at least in significant figures, if you are presenting to molecular geneticists, who can presumably be expected to be at least semi-numerate. However, if you fear that you are presenting to total innumerates, with whom you have to struggle to get them to present their main results to a consistent number of decimal places, then it might confuse them if we
ourselves present P-values to inconsistent numbers of decimal places. In this case, if your Stata output gave you P=0.000, then the compromise I use is to present the value as "P<0.0005" (because P=0.0005 would presumably round up to P=0.001). It usually makes sense, of course, to present confidence limits as well.
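[Editor's note: Newson's presentation rule (significant figures, with scientific notation for very small values) can be sketched as a small formatter. This is an editorial illustration; the threshold for switching notation is an arbitrary choice, not part of the reply.]

```python
def format_p(p, sig=4):
    """Format a p-value to `sig` significant figures, switching to
    scientific notation once the value is very small.

    The 0.0001 cut-off for switching notation is an arbitrary choice
    for this sketch, not an established convention.
    """
    if p >= 0.0001:
        return f"{p:.{sig}g}"
    return f"{p:.{sig - 1}e}"   # e.g. 1.576e-07

print(format_p(0.04321))        # 0.04321
print(format_p(0.0000001576))   # 1.576e-07
```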
19.
Kevin McConway
The Open University
> I was wondering if anyone could provide me with an
> argument or reference (preferably both) as to why p-values
> can never be exactly equal to zero.
>
Aha, a challenge!
The following is basically off the top of my head so may be rubbish. I'd say that in principle p-values CAN be zero, but in practice it doesn't happen.
Example where a p-value is zero. Model: a single observation from a continuous uniform distribution on [0, theta]. Null hypothesis: theta = 1. Test statistic: the value of the observation. Let's say we're doing a one-sided test where the alternative is values of theta bigger than one. Then if one observes x, which happens to be between 0 and 1, the p-value is 1 - x. If one observes x which happens to be bigger than 1, the p-value is exactly zero.
Explanation of why this is untypical: it's because an observation like x is impossible under the null hypothesis. Once you have observed such an x, you KNOW the null must be false. In pretty well every real-life testing situation, that sort of thing can't happen. E.g. with a test for the mean and normal data, it can, according to the model, happen that the sample mean is 10 million under the null hypothesis that the population mean is zero.
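[Editor's note: McConway's uniform example can be written out as a tiny function; this sketch is an editorial addition, and the function name is a choice made here.]

```python
def uniform_p_value(x, theta0=1.0):
    """One-sided p-value for a single observation x from Uniform[0, theta],
    testing H0: theta = theta0 against theta > theta0 (large x is extreme).

    P(X >= x | theta = theta0) is 1 - x/theta0 inside the support, and
    exactly zero once x exceeds theta0, because such an x is impossible
    under the null: the null is not merely improbable but refuted.
    """
    if x <= 0:
        return 1.0
    if x >= theta0:
        return 0.0
    return 1.0 - x / theta0

print(round(uniform_p_value(0.7), 12))   # 0.3
print(uniform_p_value(1.5))              # 0.0
```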
20. Miland Joshi
Department of Epidemiology and Public Health
University of Leicester
The probability models chosen for test statistics that are commonly used in practice only allow for the p-value to be negligible, not zero, e.g. for a Normal distribution, the domain is the whole real line, and so any range of values is theoretically possible. Even with discrete distributions, e.g. binomial, the probability of getting zero or the maximum value is not zero
because the size of n used in the probability model is finite. But I am certain many people will say that it may be better to quote confidence intervals as well as p-values, to lessen the danger of relying too much on the latter.
21.
R. Allan Reese
Graduate Research Institute
Hull University
P values can be identically zero, but the estimates based on sampling
cannot prove that.
Eg: P(angles of plane triangle add up to 181 degrees) = 0
P(all swans are white) < 0.000... based upon observations in northern
hemisphere. Then Cook went to Australia.
P=0.000 in printed output indicates P<0.0005, not 0.0001 as you wrote. The objection in reports would be because the computer printout has a pre-set format, but in reporting you may choose, so the reader cannot know that .000 does mean ".0005 or less".
Should you write P<0.0005? In my view it's spuriously precise. That figure relates to a theoretical distribution, and specifically to the tail. Even if it were "accurate" for that sample (i.e., if you used an exact test based only on exchangeability), is the figure of interest? I suggest it is included (a) because it's conventional and editors follow formats, (b) to give an air of scientific respectability and establish that you are writing in the genre, and (c) to indicate the basis for your conclusions. I put (c) last, because if your results have been accepted by the editor and referee, I cannot see that any reader will quibble about a conclusion because P<0.005 or P<0.0005. Conversely, I think far too much emphasis is placed on P values by investigators deciding what to report. *ALL* P values need interpreting in a quasi-Bayesian sense: if you didn't expect to find any effect, why did you choose to make *those* measurements?
A fashionable view is that P values must be quoted because your results will subsequently be incorporated into meta-analyses. My view is that this falls exactly into the fallacies (a) and (b) above. One approach to controlling the avalanche of self-publicising rubbish (sorry, I mean the information explosion due to massive worldwide scientific investigation) will be to reduce all formal publications to "executive reports" where authors spell out their results and claims to novelty and importance, and to make full technical reports available to referees or, by request, to readers who want to build on the work. And the technical reports should be honest, not sanitized post-rationalizations.
22.
Robert Newcombe.
University of Wales College of Medicine
Cardiff CF14 4XN, UK.
Interesting question. This issue should probably be regarded as one of the reasons why p-values should be superseded by (point and) interval estimates of relevant effect size measures as the primary means of presentation of research findings - admittedly, there are much stronger reasons for this conclusion, notably just what do you infer if significant, what do you infer if not significant, the role of sample size in all this, and what to do about multiple comparisons, all issues that the user community out there don't cope with particularly well.
I think the answer is that traditional rounding conventions implicitly assume that equal and opposite small rounding errors are of equal importance, and are therefore inappropriate when you're close to a boundary. Whether we choose to round to 3 or to 4 decimal places, what is currently the prevalence of new variant CJD in our population? Wouldn't we lose a lot of information (and wouldn't we have played into the hands of yesterday's politicians) if we rounded this to 0.000 or 0.0000? I accept that the two situations are
different in that you would also want to give an interval estimate in the latter situation - but that's a logically separate issue.
Incidentally, the issue of CIs for small proportions is one in which a traditionally widespread practice is shown to be nonsensical because it leads to a p-value that is exactly zero. Suppose we observe a prevalence of 1 or 2 out of some large n. Then the simple Wald confidence interval calculation gives a negative lower limit. It is common practice to round this to zero. But a moment's reflection shows that this is totally inappropriate: the probability of ending up with 1 or 2 positive in a sample of n is exactly zero if the true population proportion is zero, or to put it another way, the fact that it has ever happened totally rules out a population proportion of zero. This is the sort of p-value that really is zero (i.e. a nonsense), whereas p-values quoted by software as "0.000" should be read as <0.0005, and should be treated with great care, as I suspect that a lot of journal readers out there don't understand the implications of these rounding conventions. Indeed, on reflection I'm surprised that you don't more often see these rounded to p=0 or 0.0 in articles, considering that there's software about, notably Excel, that likes suppressing trailing zeros.
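[Editor's note: the negative Wald lower limit Newcombe describes is easy to reproduce; this sketch is an editorial addition.]

```python
from math import sqrt

def wald_ci(k, n, z=1.96):
    """Simple (unadjusted) Wald 95% interval for a proportion k/n.

    For small k the lower limit goes negative; truncating it to zero
    is misleading, since observing k >= 1 flatly rules out a true
    population proportion of zero.
    """
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

lo, hi = wald_ci(2, 1000)
print(lo < 0)   # True: the Wald lower limit dips below zero
```

Newcombe's own published comparisons of interval methods for proportions favour score-based (Wilson) intervals, which avoid this pathology, though that recommendation is background knowledge rather than something stated in this reply.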
I fully agree we should discourage the asterisk convention. But I think that p-values should be rounded intelligently, i.e. not to a fixed number of decimal places. Especially as showing extremely low p-values in an informative way gives some idea of how much
adjustment for multiple comparisons the directly calculated p-value can withstand. An extensive interchange of views on allstat a few years ago showed a widespread distaste for automatic methods to adjust for multiple comparisons, suggesting it is usually preferable to keep p-values as they are calculated and interpret in the light of how many comparisons were performed or contemplated.
How about rounding p-values to 2 significant digits if >0.01, otherwise 1 significant digit? Very low proportions such as prevalences should probably be shown to 2 significant digits, to enable (informal) comparison with other series, but 1 seems to be adequate for a very low p-value.
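[Editor's note: the rounding rule floated in the paragraph above can be sketched as follows; this is an editorial illustration of the suggestion, not an established convention.]

```python
def round_p_newcombe(p):
    """Round a p-value to 2 significant digits if p > 0.01,
    otherwise 1 significant digit, per the suggestion above.
    """
    digits = 2 if p > 0.01 else 1
    return float(f"{p:.{digits - 1}e}")

print(round_p_newcombe(0.0437))    # 0.044
print(round_p_newcombe(0.00123))   # 0.001
```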
23.
Trevor Lambert
Institute of Health Sciences,
Oxford University.
Not sure about a reference. If a null hypothesis is based on a continuous distribution such as the normal, t, F, chi-square etc. whose tails extend to infinity, the tail area can never be equal to zero (assuming that the test statistic takes a finite value). If a computer package prints p=0.000 I tend to report p<0.001, though I suppose p=0.000 really means p<0.0005, otherwise the result would have been rounded and reported as p=0.001. Maybe it was a typing error in your email, but I think it is wrong to report p=0.000 as p<0.0001, because
p=0.0002 for example would be reported as p=0.000.
24. Paul Taylor
University of Hertfordshire
p-values can be zero! If what you observe is impossible under your null hypothesis then the p-value is zero. If what you observe is not impossible under the null, then the p-value cannot be zero. Reasoning is directly from the definition of the p-value, i.e., Pr(Observe test stat as extreme as the one you got | H0 is true).
Most cases that we consider allow the test stat to take any value under H0, and so the test stat that you obtain from your data is not impossible under the null.
Re: presentation---if you get p=0.000 then what you know is that p is less than 0.0005, which I would use as it is pretty unambiguous, and it prevents other people reading p as zero rather than zero to 3dp. (Note that you cannot say p<0.0001, based on p being 0.000 to 3dp.)
25. Don Brown
EUTECH
p-values are mathematically the result of an integral which is exponentially decreasing and never gets to zero except at infinity. To state p=0.0001 when it is actually 0.0000001 gives a false message. However, at these values the conclusion is 'NS' for all practical purposes I ever come across. I suggest instead of stating p=0.000 or 0.0001 falsely, say p<0.0005, which is true. There will be people who will say you should only be quoting confidence intervals rather than p-values anyway!
26.
Ruth M Pickering
University Of Southampton
I have always viewed people taking your colleagues view as overly pedantic, but if they are refereeing one's paper or are respected Professors it is just as easy to present P values as they would like to see them and I can't see the point in worrying about it. I have never been aware of the basis of the view and if you find it out please let us know, and I will rearrange my attitude accordingly.
If you are changing P=0.000 to suit such people I have always assumed it should become P<0.0005, since P>0.0005 but P<0.001 would have been rounded up (note I'm just writing this quickly and haven't given due consideration to P=0.0005, just in case you are pedantic too).
When I report P values I like them to have the same number of decimal places (either 3 or 4) throughout the paper or whatever and this is some explanation for preferring P=0.000 rather than P<0.0005.
If this turns out to be 'incorrect practice' I hope I won't be publicly vilified.
27. Iain Buchan
CamCode & University of Cambridge
The problem with a calculated P of 0 is that it is either a truncation error of the computation or an error in the method. For P to be zero, you have to make the inference that the effect in question is impossible, in which case you should not be running a test in the first place. A statistical catch-22.
I used to use the asterisk rating system in software output, but was criticised for it being "not consensus".