John (and others?),
Please do not assume too much from my post. I am aware of, and have positions on, everything you mentioned in your post (and, BTW, I agree with what you said). I recently got into a debate about the specific issue in my post with a relatively well-known quantitative methods professor in psychology/education, and I wanted to see whether any of you agreed with his position (and, if so, to hear the reasons). For many years I have been teaching Case 1, in conjunction with the qualifications you mentioned, and many more. I also thought it might be interesting to discuss the issues a little, again. For example, I would argue that it is important to have a starting-point case (Case 1, Case 2, or Case 3) when teaching NHST in introductory statistics classes. Then, by adding and discussing the other related issues, we try to help students learn to conduct thoughtful, reasonable statistical practice. I believe that the social/behavioral/health sciences would have developed more quickly if effect sizes and confidence intervals for effects had been used for the past 75 years. We will still get where we want to go, but it is going to take longer. For example, the APA Publication Manual, which is used by multiple disciplines, has made some (perhaps not enough) improvements in statistical practice and reporting, but many journals are lagging behind.
Cheers,
Burke
>>> "John Sorkin" <[log in to unmask]> 1/7/2010 5:16 PM >>>
Burke,
In some sense your question is of little import. I don't mean to denigrate you or your thought process by saying this, but we must remember that there is nothing "magical" about p<0.05. There is no science behind the choice of 0.05 as indicating significance compared with any other value. What is the difference between a p<0.04, which we say is significant, and one that is <0.06, which we say is not significant? They both differ from the magical 0.05 by 0.01! The choice of 0.05 comes from R.A. Fisher, who pulled the value out of the air. It is, I believe, far better to give the effect size along with a measure of its precision (i.e., the SE) and a p value, or perhaps better still the effect and a 95% confidence interval around the effect, without getting tied into knots determining what is statistically significant and what is not. It is all too easy to fall into the trap of saying that one will pay attention to a test associated with a p<0.05 (or <=0.05) and ignore results with any larger p value.
We must also remember that statistical significance does not mean a result is important, and that an important result need not be statistically significant.
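[Editor's note: the effect-plus-interval reporting described above can be sketched as below. This is a minimal illustration, not anything from the thread: the function name, the sample data, and the use of a large-sample normal critical value (z = 1.96) are all assumptions; for small samples a t critical value would be more appropriate.]

```python
import math
from statistics import mean, stdev

def effect_with_ci(group_a, group_b, z=1.96):
    """Mean difference between two groups, its SE, and an approximate 95% CI.

    Uses the large-sample normal approximation (z = 1.96), reporting the
    effect and its precision rather than only a significant/not-significant
    verdict.
    """
    na, nb = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    # SE of a difference in means from two independent samples
    se = math.sqrt(stdev(group_a) ** 2 / na + stdev(group_b) ** 2 / nb)
    return diff, se, (diff - z * se, diff + z * se)

# Hypothetical data for illustration only
treatment = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
control = [4.4, 4.6, 4.2, 4.7, 4.5, 4.3]
diff, se, (lo, hi) = effect_with_ci(treatment, control)
print(f"effect = {diff:.2f}, SE = {se:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Reported this way, a reader sees both the size of the effect and how precisely it was estimated, whichever side of 0.05 the p value falls on.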
John
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)