Hi Ethan, thanks for pushing me to clarify--see below.

>> I hate to broach this subject again due to its wildly controversial
>> nature, but I was wondering whether there was any reference which
>> systematically analyses resolution cutoffs as a function of I/sig,
>> Rmerge, Rmeas, Rpim, etc. I strongly dislike Rmerge/Rcryst for
>> determining cutoffs, for obvious reasons--and especially for datasets
>> of higher multiplicity--but nevertheless it is a ubiquitously-reported
>> statistic, and one therefore has to make an argument against using it.
>
> What is your question, exactly?

The question is: "is there a reference in which Rmerge has been
thoroughly, clearly, and authoritatively discredited as a data
evaluation metric in favor of Rmeas, Rpim, etc., and if so, what is
that reference?"

> I don't follow the logic that because a statistic is reported, one
> must therefore argue against it.

Let me put it more clearly: when there is a conventional, standardized
method that one wants to abandon in favor of a better one, in practice
one has to make an argument for the new method and against the old.
This is in contrast to continuing to use the conventional method,
which, even if demonstrably surpassed by the newer one, de facto needs
no justification. So, in the current example, if you want to use Rmeas
or Rpim and not even report Rsym/merge, it will ruffle feathers, even
though the former are certainly superior.

Sorry for the confusion,

Jacob

*******************************************
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
email: [log in to unmask]
*******************************************