Arturo Marti-Carvajal writes:
>Usually, journals accept papers that show positive
>results. Now then, what would you do if your research,
>correctly planned and executed, shows negative results?
>In other words, what would the strategies be to achieve
>publication of the research? What would you recommend?
You need to be careful here. There is indeed a lot of documentation to show
that positive research is more likely to be published (and possibly to be
published sooner, and even to be published more than once!). Nevertheless,
it is unclear whether this is caused in whole or in part by the actions of
journal editors. There are some hints that the researchers themselves may
have a tendency to submit papers for publication based on whether the
results are positive or negative. In other words, journal authors may
self-censor their negative findings. I'm sorry that I can't provide a
reference for this.
Also please keep in mind that terms like "negative results" are simplistic,
subjective, and ambiguous. There is good evidence, for example, that two
people reading the same paper can often come up with different opinions
about whether that study is positive or negative.
But enough of the caveats. As for the strategies you should use as an
author, most of what you should do is the same whether your study is
negative or positive. A good reference book is:
Lang, T.A. and Secic, M. (1997) How to Report Statistics in Medicine.
Annotated Guidelines for Authors, Editors, and Reviewers, Philadelphia, PA:
American College of Physicians.
Still, there are things you should pay special attention to when writing up
results from a negative study. These are also things that READERS of
negative studies should look for.
1. A power or sample size calculation. Devote a paragraph to this as it is
considered a critical component of any well designed research study. This
calculation is best done a priori (prior to the collection of data). If you
only calculate power post hoc (after the data is collected), make sure that
the effect size used in that calculation is based on what is considered a
clinically relevant difference, and is not based on the difference that was
observed in your study.
Post hoc power calculations that use the differences observed in the study
are useless, because they tell you nothing more than what your p-value
already told you. If you have a large p-value, then the post hoc power at
the observed difference is always very low. If you have a small p-value,
then it is always very high.
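Both calculations above can be sketched in a few lines. This is a minimal illustration for a two-sided, two-sample comparison of means with known standard deviation; the specific numbers (SD of 10, clinically relevant difference of 5, 80% power) are made-up inputs for the example, not figures from any study mentioned here:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(sigma, delta, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided two-sample z-test:
    n = 2 * (z_{alpha/2} + z_beta)^2 * sigma^2 / delta^2,
    where delta is the clinically relevant difference."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

def post_hoc_power(z_observed, alpha=0.05):
    """Post hoc power evaluated at the OBSERVED difference. Note that it
    depends only on the observed z statistic (hence on the p-value)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return 1 - z.cdf(z_alpha - z_observed)

# A priori: detect a difference of 5 with SD 10, alpha 0.05, power 80%.
print(sample_size_per_group(sigma=10, delta=5))   # 63 per group

# Post hoc futility: a result sitting exactly at the significance
# boundary (p = 0.05, z = 1.96) always has post hoc power of about 50%.
print(round(post_hoc_power(z_observed=1.96), 2))  # 0.5
```

The second function makes the point in the paragraph above concrete: post hoc power at the observed difference is just a transformation of the p-value, so a barely non-significant result always comes out near 50% power, and a clearly non-significant one comes out lower still.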
2. Confidence intervals. The width of a confidence interval provides
especially valuable information for a negative study. If the interval is so
narrow that it excludes any clinically relevant difference, then your
negative results have a lot of credibility.
If instead the confidence interval is wide enough to drive a truck through,
then you have shown only that the negative findings might be real, or might
be caused by an inadequate sample size. This is a very unhappy situation,
because it means that we will never know for sure why the study was
negative.
An example of a very wide confidence interval appears in a 1995 study of
homoeopathic treatment of pain and swelling after oral surgery (I don't have
the reference readily available). When the authors examined swelling 3 days
after the operation, they showed that homoeopathy led to 1 mm less swelling
on average, which was not statistically significant. The 95% confidence
interval ranged from -5.5 to 7.5 mm. In this context, I suspect a 13 mm wide
interval is quite large (though one cannot quite drive a truck through it).
This implies that neither a large improvement due to homoeopathy nor a large
decrement could be ruled out.
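The arithmetic behind that interpretation is easy to check. From the reported 95% interval of -5.5 to 7.5 mm one can recover the implied standard error, and then ask whether a clinically relevant difference still lies inside the interval. The 4 mm threshold below is a hypothetical illustration, not a figure from the study:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)           # ~1.96 for a 95% interval

# Reported results: 1 mm mean reduction, 95% CI from -5.5 to 7.5 mm.
mean_diff, lower, upper = 1.0, -5.5, 7.5

width = upper - lower                     # 13 mm wide
se = width / (2 * z)                      # standard error implied by the CI

# Hypothetical clinically relevant difference of 4 mm:
relevant = 4.0
inconclusive = lower < relevant < upper   # still inside the interval
print(width, round(se, 2), inconclusive)  # 13.0 3.32 True
```

Because the interval comfortably contains both a meaningful benefit and a meaningful harm, the study cannot distinguish "no effect" from "effect missed for lack of sample size", which is exactly the unhappy situation described above.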
I talk a bit about this in my "How to Read a Medical Journal Article" web
presentation, though I hope to update and add information about this
important issue when I next update things.
Steve Simon, [log in to unmask], Standard Disclaimer.
How to Read a Medical Journal Article: http://www.cmh.edu/stats/journal.htm