On Tuesday, December 06, 2011 11:43:05 am Jacob Keller wrote:
> Hi Ethan, thanks for pushing me to clarify--see below.
> 
> >> I hate to broach this subject again due to its wildly controversial
> >> nature, but I was wondering whether there was any reference which
> >> systematically analyses resolution cutoffs as a function of I/sig,
> >> Rmerge, Rmeas, Rpim, etc. I strongly dislike Rmerge/Rcryst for
> >> determining cutoffs, for obvious reasons--and especially for datasets
> >> of higher multiplicity--but nevertheless it is a ubiquitously-reported
> >> statistic, and one therefore has to make an argument against using it.
> >
> > What is your question, exactly?
> 
> The question is: "is there a reference in which Rmerge has been
> thoroughly, clearly, and authoritatively discredited as a data
> evaluation metric in favor of Rmeas, Rpim, etc., and if so, what
> is that reference?"
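
[For readers following along, here is a minimal sketch of the three
statistics under discussion, applied to made-up intensity data. The
formulas are the standard definitions; the numbers are hypothetical,
chosen only to show the behavior Jacob objects to: Rmerge barely moves
as multiplicity grows, while the multiplicity-corrected Rpim falls.]

```python
import math

# Standard definitions, summed over unique reflections hkl with
# n redundant observations I_i and mean <I>:
#   Rmerge = sum |I_i - <I>|                  / sum I_i
#   Rmeas  = sum sqrt(n/(n-1)) * |I_i - <I>|  / sum I_i
#   Rpim   = sum sqrt(1/(n-1)) * |I_i - <I>|  / sum I_i

def r_stats(reflections):
    """reflections: list of lists of intensities, one list per unique hkl."""
    num_merge = num_meas = num_pim = denom = 0.0
    for obs in reflections:
        n = len(obs)
        if n < 2:
            continue  # R factors are undefined for single observations
        mean = sum(obs) / n
        dev = sum(abs(i - mean) for i in obs)
        num_merge += dev
        num_meas += math.sqrt(n / (n - 1)) * dev
        num_pim += math.sqrt(1.0 / (n - 1)) * dev
        denom += sum(obs)
    return num_merge / denom, num_meas / denom, num_pim / denom

# Hypothetical data: the same level of per-observation scatter,
# measured 2-fold vs. 8-fold per reflection.
low_mult = [[100.0, 90.0], [50.0, 46.0]]
high_mult = [[100.0, 90.0, 102.0, 95.0, 98.0, 88.0, 104.0, 93.0],
             [50.0, 46.0, 51.0, 48.0, 49.0, 47.0, 52.0, 45.0]]

for label, data in [("2-fold", low_mult), ("8-fold", high_mult)]:
    rm, rme, rp = r_stats(data)
    print(f"{label}: Rmerge={rm:.3f}  Rmeas={rme:.3f}  Rpim={rp:.3f}")
```

[Rpim drops sharply with multiplicity because the merged intensity
really does get more precise; Rmerge stays put, which is exactly the
complaint about using it for high-multiplicity data sets.]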

Why assume that any of those is a valid criterion for discarding data?

I would argue that a better approach is to ask whether the data measured
in the highest resolution shell is contributing positively to the map
quality. The R_{whatever} may be an imperfect predictor of that, but is
not by itself the property of interest.
 
In other words, there are two separate issues in play here:

1) Is there a "best" measure of data quality in the abstract
   (i.e. one that can be calculated before you solve the structure
   or calculate a map)?

2) Is there a standard statistic for choosing which data to use in
   refinement?

If you just want to argue which R_{whatever} best serves to address 
the first issue, carry on.

If you are worried about the second issue, IMHO none of these 
quantities are appropriate.  They address entirely the wrong question.
We all know that good data does not guarantee a good model, and noisy
data may nevertheless yield a valid model. So you need a better reason
to discard data than "it's noisy".
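
[The point that noisy data can still carry signal is easy to show
numerically. A hypothetical simulation, not crystallographic software:
each individual "measurement" below has I/sigma = 0.25, i.e. is
individually useless, yet averaging many of them recovers the true
intensity because the error of the mean shrinks as sigma/sqrt(n).]

```python
import math
import random

random.seed(0)  # deterministic for the sake of the example
true_I = 5.0
sigma = 20.0    # per-observation I/sigma = 0.25: very noisy
n = 400

obs = [random.gauss(true_I, sigma) for _ in range(n)]
mean_all = sum(obs) / n
stderr = sigma / math.sqrt(n)  # expected error of the average: 1.0

print(f"true={true_I}, averaged estimate={mean_all:.2f} "
      f"(expected error ~{stderr:.2f})")
```

[Throwing those observations away on the grounds that each one is
noisy would discard real information about true_I.]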

	Ethan


> > I don't follow the logic that because a statistic is reported, one
> > must therefore argue against it.
> 
> Let me put it more clearly: when there is a conventional, standardized
> method that one wants to abandon in favor of a better method, in
> practice one has to make an argument for the new one and against the
> old one. This is in contrast to continuing to use the conventional
> method, which, even if apodictically surpassed by the newer method, de
> facto needs no justification. So, in the current example, if you want
> to use Rmeas or Rpim and not even report Rsym/merge, it will ruffle
> feathers, even though the former is certainly superior.

 
> Sorry for the confusion,
> 
> Jacob
> 
> *******************************************
> Jacob Pearson Keller
> Northwestern University
> Medical Scientist Training Program
> email: [log in to unmask]
> *******************************************
> 

-- 
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742