for those who remember/are interested in this issue from back in june.

first and foremost: let me thank the community, especially the almost 30 
individuals who replied, for all the support and expert advice sent to 
me. the opinion that no data should be thrown away was unanimous, as i 
kind of expected.

i wrote the rebuttal and just got word that the journal has accepted the 
manuscript for publication.

note to jamie and adam: next season, please declare the myth that "the 
reviewer is always right" busted. let the world know.
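for anyone who wants to see the trade-off in actual numbers, here is a 
quick normal-approximation power sketch (the effect size d = 0.3 and 
alpha = 0.05 are made-up illustration values, not figures from our study):

```python
from math import sqrt
from statistics import NormalDist


def approx_power(n1, n2, d, alpha=0.05):
    """Approximate power of a two-sided two-sample t test.

    Normal approximation: the noncentrality parameter is
    d * sqrt(n1*n2 / (n1+n2)); power is the probability the test
    statistic falls beyond either critical value.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    delta = d * sqrt(n1 * n2 / (n1 + n2))
    # both rejection tails; the "wrong-side" tail is usually negligible
    return z.cdf(delta - z_crit) + z.cdf(-delta - z_crit)


# reviewer's preferred design vs the design actually used
print(f"n1 = n2 = 200:      power ~ {approx_power(200, 200, d=0.3):.3f}")
print(f"n1 = 200, n2 = 380: power ~ {approx_power(200, 380, d=0.3):.3f}")
```

under these hypothetical assumptions, the 180 extra controls lift power 
from roughly 0.85 to 0.93; throwing them away to force balance simply 
hands that power back.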


dr kardos laszlo wrote:
> dear list members,
>
> i would be grateful for opinions from the statistical community on an 
> issue i first thought trivial, but later... here goes:
>
> we recruited n1 = 200 patients with a disease and n2 = 380 healthy 
> controls to compare them on some outcome using t tests.
>
> as part of a publication process in a reputable journal which shall 
> remain unnamed, a reviewer complained that this setting is unbalanced; 
> n1 should be equal to n2.
>
> assuming that the reviewer wants to see the principle "50-50% split 
> gives greatest power" upheld, we explained in a rebuttal that n1 was 
> limited by factors beyond our control, while n2 was not, so the choice 
> was either to limit n2 (and the test's power) artificially to ensure 
> balance or to put allocated study resources to good use and recruit 
> more controls and, with them, extra power and precision for our analysis.
>
> they still, however, insist that balance is crucial. clearly, we 
> cannot now (and could not have at design time) set n1 = n2 = 290. the 
> only way we could satisfy them would be by throwing away 180 randomly 
> chosen extra controls and re-analyzing with n1 = n2 = 200.
>
> my key question: could the reviewer be right on this? are there any 
> circumstances under which the trade-off between a fully balanced, 
> lower-power design and an unbalanced, higher-power design favors the 
> former, if these are the only two options? if not, is there any 
> literature (or word from high-up stats experts) explicitly clarifying 
> this issue, something we can cite rather than expect them to take our 
> word for it?
>
>
> on a more general note, what is the current common wisdom on how to 
> handle disagreements with peer reviewers on strictly statistical 
> issues? i hear "the reviewer is always right" from time to time, but 
> then find myself uncomfortable when this runs directly counter even to 
> the very basics of my med stats education.
>
> best regards,
>
> laszlo