Original Query:
>>> "Peter Levy" <[log in to unmask]> 21/09/00 14:09:43 >>>
<<Is there a good reason why the standard error is commonly used for error bars instead of the 95% confidence interval, other than they are roughly half as big, so give a visual impression of higher confidence? Is it simply because they are independent of any particular p value?>>

Responses
----------------------------------
So far as I can tell, the use of standard error rests on nothing but convention and inertia.  The standard error is simply the standard deviation of the sampling distribution of sample means, and as such represents approximately the 68% confidence interval.  I agree it would make much better sense to have error bars represent the 95% interval.
Richard Lowry
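
A minimal sketch in Python (simulated data; scipy assumed to be available) illustrating the coverage point: +/- 1 SE corresponds to roughly 68.3% coverage under normality, while a 95% interval needs roughly twice that half-width.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=30)    # hypothetical sample

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))            # standard error of the mean

# Coverage of mean +/- 1 SE under normality is about 68.3%
print(stats.norm.cdf(1) - stats.norm.cdf(-1))   # ~0.683

# ~68% interval (mean +/- 1 SE) versus 95% interval (t multiplier for n = 30)
t95 = stats.t.ppf(0.975, df=len(x) - 1)
print(mean - se, mean + se)
print(mean - t95 * se, mean + t95 * se)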
----------------------------------
Well, for a start, SE bars are not really 'independent of any particular p-value' - for a normal distribution, they represent a '68.3% confidence interval'.  So, they are entirely analogous to CIs, just quantitatively different.
I think the simple answer to your question is 'no' - i.e. there is no 'good reason', except for history and tradition - and, as you presumably mean to imply, there is every 'good reason' to actually go for 95% CIs instead.  However, SE bars were being used long before most people were even thinking about the concept of confidence intervals - and that has stuck.
One of the most unfortunate things, of course, is that one so often hears people attempting to draw conclusions based on 'overlapping'/'non-overlapping' SE bars.  As I'm sure you know, this interpretation is not even as simple as some might think in relation to 95% CIs, and it is certainly misleading in terms of SE bars.
Dr John Whittington
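
To make the 'overlap' point concrete, a minimal sketch in Python with made-up groups (the data and numbers are purely hypothetical): whether two +/- 1 SE bars overlap is not the same question as whether the difference is significant, because the test depends on the standard error of the difference, sqrt(SE_a^2 + SE_b^2).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, size=20)    # hypothetical group A
b = rng.normal(11.5, 2.0, size=20)    # hypothetical group B

se_a = a.std(ddof=1) / np.sqrt(len(a))
se_b = b.std(ddof=1) / np.sqrt(len(b))

# Do the +/- 1 SE bars overlap?
bars_overlap = (a.mean() + se_a) >= (b.mean() - se_b)

# The comparison itself rests on the SE of the difference, not the bars
se_diff = np.sqrt(se_a**2 + se_b**2)
t_stat, p_val = stats.ttest_ind(a, b)

print("SE bars overlap:", bars_overlap)
print("two-sample t-test p =", p_val)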
----------------------------------
Also, there is less dependence on the normality assumption.  Not no
dependence, but less.
Jay Warner
----------------------------------
If you look in various journals at the reporting of results you will find that authors are not consistent in usage when it comes to "mean and error bars". Some plot the mean +/- the standard deviation; some plot the mean +/- the standard error. I put this usage down to statistical innumeracy: authors know they have to give some indication of the variability of their results, and so they use this type of display. I don't think that the majority even think about confidence intervals or anything as sophisticated as your suggestions.
I have even seen "mean and error bars" given when each set of data consisted of only two or three measurements!
In the book "Medical Statistics on Personal Computers" by R A Brown and J Swanson Beck, BMJ Publishing Group (2nd ed, 1994), ISBN 0 7279 0771 9, we put this down (page 15) as a "display to be avoided".
Dick Brown
----------------------------------
Much of my work concerns experiments which make a comparison between two groups, treated and control.  The appropriate confidence interval is for the difference between the means.  It is misleading to show 95% confidence bars for the means of the two groups separately unless the absolute value of the mean for each group is the main focus of interest, rather than the difference between the groups.
In pharmacology, it is standard practice to plot the mean +/- SEM for the individual groups.  Pharmacologists are familiar with data plotted in this way and usually interpret it correctly.  This is less often true if you show them the mean and 95% CI for each group - indeed, in my experience they usually ask for mean +/- SEM instead.
I make a point of always indicating the sample size in the figure caption whenever I plot SEM bars.
T R Auton
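
For what it is worth, a minimal sketch in Python (made-up data, pooled-variance assumption) of the interval that addresses the comparison directly, i.e. the 95% CI for the difference between the two group means rather than separate intervals for each group.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(10.0, 2.0, size=15)    # hypothetical control group
treated = rng.normal(12.0, 2.0, size=15)    # hypothetical treated group

diff = treated.mean() - control.mean()

# Pooled-variance 95% CI for the difference between the two means
n1, n2 = len(treated), len(control)
sp2 = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t95 = stats.t.ppf(0.975, df=n1 + n2 - 2)

print("difference of means:", diff)
print("95% CI for the difference:", diff - t95 * se_diff, diff + t95 * se_diff)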
----------------------------------

Thanks to those who responded.


Peter Levy
Centre for Ecology and Hydrology
Bush Estate, Penicuik
Midlothian, EH26 0QB, UK
Tel: 0131 445 8556 (direct)
       0131 445 4343 (switchboard)
Fax: 0131 445 3943
E-mail: [log in to unmask]

