JiscMail: Email discussion lists for the UK Education and Research communities

CCP4BB Archives (CCP4BB@JISCMAIL.AC.UK), June 2012

Subject: Re: Death of Rmerge
From: aaleshin <[log in to unmask]>
Reply-To: aaleshin <[log in to unmask]>
Date: Fri, 1 Jun 2012 10:59:51 -0700
Content-Type: text/plain

Please excuse my ignorance, but I cannot understand why Rmerge is unreliable for estimating the resolution.
I mean, from a theoretical point of view, <I/sigma> is indeed a better criterion, but that is not obvious from a practical point of view.

<I/sigma> depends on the method used to estimate the sigmas, so the same data processed by different programs may have different <I/sigma>. Moreover, HKL2000 allows users to adjust the sigmas manually. Rmerge, by contrast, estimates the spread from the differences between repeated measurements of the same structure factor, and is therefore independent of our preferences. It also has a very important ability to validate the consistency of the merged data: if my crystal changed during data collection, or something went wrong with the diffractometer, Rmerge will show it immediately, but <I/sigma> will not.

So please explain: why should we stop using Rmerge as a criterion of data resolution?

Alex
Sanford-Burnham Medical Research Institute
10901 North Torrey Pines Road
La Jolla, California 92037
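For concreteness, the quantity Alex describes, the summed absolute spread among repeated measurements of each unique reflection divided by the summed intensity, can be sketched in a few lines. This is a toy illustration only, not any particular program's implementation; grouping by unique hkl after symmetry reduction and the scaling of the observations are assumed to have been done already:

```python
from collections import defaultdict

def rmerge(observations):
    """Rmerge = sum_hkl sum_i |I_i - <I>_hkl| / sum_hkl sum_i I_i.

    `observations` is an iterable of (hkl, intensity) pairs holding
    every individual (unmerged) measurement.
    """
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)

    num = 0.0  # sum of |I_i - <I>| over multiply-measured reflections
    den = 0.0  # sum of I_i over the same reflections
    for intensities in groups.values():
        if len(intensities) < 2:
            continue  # a single measurement carries no spread information
        mean_i = sum(intensities) / len(intensities)
        num += sum(abs(i - mean_i) for i in intensities)
        den += sum(intensities)
    return num / den if den else 0.0

# Two reflections, each measured twice with ~10% and ~10% scatter:
obs = [((1, 0, 0), 100.0), ((1, 0, 0), 110.0),
       ((0, 2, 0), 50.0), ((0, 2, 0), 45.0)]
print(rmerge(obs))  # ~0.049
```

With perfectly consistent repeats the numerator, and hence Rmerge, is zero; any drift in the crystal or the instrument inflates the spread term directly, which is the sensitivity Alex is referring to.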



On Jun 1, 2012, at 5:07 AM, Ian Tickle wrote:

> On 1 June 2012 03:22, Edward A. Berry <[log in to unmask]> wrote:
>> Leo will probably answer better than I can, but I would say I/SigI counts
>> only the present reflection, so eliminating noise by anisotropic truncation
>> should improve it, raising the average I/SigI in the last shell.
> 
> We always include unmeasured reflections with I/sigma(I) = 0 in the
> calculation of the mean I/sigma(I) (i.e. we divide the sum of
> I/sigma(I) for measureds by the predicted total no of reflections incl
> unmeasureds), since for unmeasureds I is (almost) completely unknown
> and therefore sigma(I) is effectively infinite (or at least finite but
> large since you do have some idea of what range I must fall in).  A
> shell with <I/sigma(I)> = 2 and 50% completeness clearly doesn't carry
> the same information content as one with the same <I/sigma(I)> and
> 100% complete; therefore IMO it's very misleading to quote
> <I/sigma(I)> including only the measured reflections.  This also means
> we can use a single cut-off criterion (we use mean I/sigma(I) > 1),
> and we don't need another arbitrary cut-off criterion for
> completeness.  As many others seem to be doing now, we don't use
> Rmerge, Rpim etc as criteria to estimate resolution, they're just too
> unreliable - Rmerge is indeed dead and buried!
> 
> Actually a mean value of I/sigma(I) of 2 is highly statistically
> significant, i.e. very unlikely to have arisen by chance variations,
> and the significance threshold for the mean must be much closer to 1
> than to 2.  Taking an average always increases the statistical
> significance, therefore it's not valid to compare an _average_ value
> of I/sigma(I) = 2 with a _single_ value of I/sigma(I) = 3 (taking 3
> sigma as the threshold of statistical significance of an individual
> measurement): that's a case of "comparing apples with pears".  In
> other words in the outer shell you would need a lot of highly
> significant individual values >> 3 to attain an overall average of 2
> since the majority of individual values will be < 1.
> 
>> F/sigF is expected to be better than I/sigI because d(x^2) = 2x dx,
>> so d(x^2)/x^2 = 2 dx/x, i.e. dI/I = 2 dF/F (or approaches that in the limit . . .)
> 
> That depends on what you mean by 'better': every metric must be
> compared with a criterion appropriate to that metric. So if we are
> comparing I/sigma(I) with a criterion value = 3, then we must compare
> F/sigma(F) with criterion value = 6 ('in the limit' of zero I), in
> which case the comparison is no 'better' (in terms of information
> content) with I than with F: they are entirely equivalent.  It's
> meaningless to compare F/sigma(F) with the criterion value appropriate
> to I/sigma(I): again that's "comparing apples and pears"!
> 
> Cheers
> 
> -- Ian
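Ian's completeness-aware average, dividing the sum of I/sigma(I) over the measured reflections by the predicted total so that each unmeasured reflection effectively contributes I/sigma(I) = 0, can be sketched as follows. This is a toy illustration; in practice the per-shell predicted reflection counts come from the data-processing program:

```python
def mean_i_over_sigma(i_over_sigma_measured, n_predicted):
    """Mean I/sigma(I) with unmeasured reflections counted as zero.

    `i_over_sigma_measured`: I/sigma(I) values for the reflections actually
    measured in the shell; `n_predicted`: total number of reflections the
    shell should contain at 100% completeness.
    """
    return sum(i_over_sigma_measured) / n_predicted

# A shell that is 50% complete: 5 measured reflections averaging 2.0
# out of 10 predicted.
measured = [2.0, 2.0, 2.0, 2.0, 2.0]
print(mean_i_over_sigma(measured, 10))  # completeness-aware mean: 1.0
print(sum(measured) / len(measured))    # naive mean over measureds: 2.0
```

The two numbers coincide only at 100% completeness; at 50% completeness the naive average overstates the information content by a factor of two, which is Ian's argument for folding completeness into a single cut-off criterion.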
