Email discussion lists for the UK Education and Research communities

## EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK


Subject: Re: Of p-values and Confidence Intervals
From: Paul Glasziou <[log in to unmask]>
Reply-To: Paul Glasziou <[log in to unmask]>
Date: Mon, 13 Mar 2006 12:37:19 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (49 lines)
```
At 13/03/2006, Howard Mann wrote:

>Dear List,
>
>When reading a published article I look for the published estimate of the
>treatment effect on the primary outcome measure(s), and the associated 95%
>C.I., and disregard any published p-values.
>
>I have seen the following comment in another Forum:
>
>"A major reason for including p-values with confidence intervals (CI)
>arises when sample data is not normally distributed. P-values are based on
>a normal distribution, whereas CIs are calculated from the data itself. In
>certain cases, a CI that does not cross zero (for absolute risk
>comparisons) is actually *not* significant. This is easily determined if a
>p-value for the comparison exceeds the significance (alpha) level."
>
>What does this mean? What do you understand by this comment?

I agree with Ted Harding that this is nonsense.

1. There are many different ways to calculate a p-value (e.g. with and
   without continuity corrections in dichotomous data, with and without
   transformations, parametric and non-parametric, etc.).
2. The CI and p-value should correspond IF THE SAME underlying method has
   been used to calculate both.
3. Correspondingly, if different methods are used you may get the anomaly
   of, say, a CI crossing the "null" value but a significant p-value. (It
   sounds as though the person who wrote this did just that, e.g. used an
   RR for the CI but then used a difference of proportions, say with a
   continuity correction, to calculate the p-value. They should be close
   but occasionally just different enough to give this anomaly.)

>Why should I bother with p-values?

I prefer a CI, but I also can find it hard to distinguish a p of 0.03 from
0.001 from the CI, but would think differently about the two results.

>Sincerely,
>
>Howard Mann
>University of Utah School of Medicine

Paul Glasziou
Department of Primary Health Care &
Director, Centre for Evidence-Based Practice, Oxford
ph: 44-1865-227055
```
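Glasziou's third point, that a CI and a p-value computed by different methods can disagree near the boundary of significance, can be shown numerically. The sketch below uses made-up 2x2 counts (not data from this thread) and two deliberately different methods: a 95% Wald confidence interval for the risk difference, and a chi-square test with Yates continuity correction. For these counts the CI excludes zero while the p-value exceeds 0.05.

```python
import math

# Hypothetical trial counts, chosen to sit near the significance boundary
# (illustrative only): events / total in each arm.
a, n1 = 3, 25    # treatment: 3 events out of 25
b, n2 = 10, 25   # control:  10 events out of 25

p1, p2 = a / n1, b / n2

# Method 1: 95% Wald CI for the risk difference p1 - p2.
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# Method 2: chi-square test with Yates continuity correction on the
# 2x2 table [[a, n1-a], [b, n2-b]].
n = n1 + n2
chi2 = (n * (abs(a * (n2 - b) - (n1 - a) * b) - n / 2) ** 2
        / (n1 * n2 * (a + b) * (n - (a + b))))

# Two-sided p-value for chi-square with 1 df: p = erfc(sqrt(chi2 / 2)).
p = math.erfc(math.sqrt(chi2 / 2))

print(f"risk difference 95% CI: ({lo:.3f}, {hi:.3f})")  # excludes 0
print(f"Yates chi-square p-value: {p:.3f}")             # exceeds 0.05
```

With the same underlying method on both sides (e.g. an uncorrected test paired with a Wald CI) the two would agree; the mismatch appears only because the continuity correction shifts the p-value while the CI is untouched, which is exactly the anomaly described above.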

