CCP4BB Archives
CCP4BB@JISCMAIL.AC.UK
CCP4BB, January 2012

Subject: Re: Reasoning for Rmeas or Rpim as Cutoff
From: Ronald E Stenkamp <[log in to unmask]>
Reply-To: Ronald E Stenkamp <[log in to unmask]>
Date: Tue, 31 Jan 2012 10:06:41 -0800
Content-Type: TEXT/PLAIN
Parts/Attachments: TEXT/PLAIN (134 lines)

James Holton suggested a reason why the "forefathers" used a 3-sigma cutoff.

I'll give another reason provided to me years ago by one of those guys, Lyle Jensen.  In the 1970s, we were interested in the effects of data-set thresholds on refinement (Acta Cryst. B31, 1507-1509 (1975)), so he explained to me his view of the history of "less-than" cutoffs.  It was a very Seattle-centric explanation.

In the 50s and 60s, Lyle collected intensity data using an integrating Weissenberg camera and a film densitometer.  Some reflections had intensities below the fog or background level of the film and were labeled "unobserved".  Sometimes they were used in refinement, but only if the calculated Fc values were above the "unobserved" value.

When diffractometers came along with their scintillation counters, there were measured quantities for each reflection (sometimes negative), and Lyle needed some way to compare structures refined with diffractometer data with those obtained using film methods.  Through some method he never explained, a value of 2-sigma(I) defining "less-thans" was deemed comparable to the "unobserved" criterion used for the earlier structures.  His justification for the 2-sigma cutoff was that it allowed him to understand the refinement behavior and R values of these data sets collected with newer technology.

I don't know who all contributed to the idea of a 2-sigma cutoff, nor whether there were theoretical arguments for it.  I suspect the idea of some type of cutoff was discussed at ACA meetings and other places.  And a 2-sigma cutoff might have sprung up independently in many labs.

I think the gradual shift to a 3-sigma cutoff was akin to "grade inflation".  If you could improve your R values with a 2-sigma cutoff, 3-sigma would probably be better.  So people tried it.  It might be interesting to figure out how that was brought under control.  I suspect a few troublesome structures and some persistent editors and referees gradually raised our group consciousness to avoid the use of 3-sigma cutoffs.

Ron

On Mon, 30 Jan 2012, James Holton wrote:

> Once upon a time, it was customary to apply a 3-sigma cutoff to each and 
> every spot observation, and I believe this was the era when the "~35% Rmerge 
> in the outermost bin" rule was conceived, alongside the "80% completeness" 
> rule.  Together, these actually do make a "reasonable" two-pronged criterion 
> for the resolution limit.
>
> Now, by "reasonable" I don't mean "true", just that there is "reasoning" 
> behind it.  If you are applying a 3-sigma cutoff to spots, then the expected 
> error per spot is not more than ~33%, so if Rmerge is much bigger than that, 
> then there is something "funny" going on.  A violation of the chosen 
> space-group symmetry (which may only show up at high resolution), radiation 
> damage, non-isomorphism, bad absorption corrections, crystal slippage, or any 
> of a myriad of other "scaling problems" could do this.  Rmerge became a popular 
> statistic because it proved a good way of detecting problems like these in 
> data processing.  Fundamentally, if you have done the scaling properly, then 
> Rmerge/Rmeas should not be worse than the expected error of a single spot 
> measurement.  This is either the error expected from counting statistics (33% 
> if you are using a 3-sigma cutoff), or the calibration error of the 
> instrument (~5% on a bad day, ~2% on a good one), whichever is bigger.
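>
> For reference, the standard definitions behind these statistics (R_meas from 
> Diederichs & Karplus, 1997; R_pim from Weiss, 2001), with n symmetry-equivalent 
> observations I_i(hkl) of each unique reflection:
>
>   R_{merge} = \sum_{hkl} \sum_i |I_i(hkl) - \langle I(hkl) \rangle| / \sum_{hkl} \sum_i I_i(hkl)
>
>   R_{meas} = \sum_{hkl} \sqrt{\tfrac{n}{n-1}} \sum_i |I_i(hkl) - \langle I(hkl) \rangle| / \sum_{hkl} \sum_i I_i(hkl)
>
>   R_{pim} = \sum_{hkl} \sqrt{\tfrac{1}{n-1}} \sum_i |I_i(hkl) - \langle I(hkl) \rangle| / \sum_{hkl} \sum_i I_i(hkl)
>
> The ~33% figure follows directly from the cutoff: keeping only spots with 
> I/\sigma(I) \ge 3 means \sigma(I)/I \le 1/3 \approx 33% for every surviving 
> spot, so the relative error expected from counting statistics alone cannot be 
> worse than about a third.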
>
> As for completeness, 80% overall is about the bare minimum of what you can 
> get away with before the map starts to change noticeably.  See my movie here:
> http://bl831.als.lbl.gov/~jamesh/movies/index.html#completeness
> so I imagine this "80% rule" just got extended to the outermost bin.  After 
> all, claiming a given resolution when you've only got 50% of the spots at 
> that resolution seems unwarranted, but requiring 100% completeness seems a 
> little too strict.
>
> Where did these rules come from?  As I recall, I first read about them in the 
> manual for the "PROCESS" program that came with our R-axis IIc x-ray system 
> when I was in graduate school (ca 1996).  This program was conveniently 
> integrated into the data collection software on the detector control 
> computers: one was running VMS, and the "new" one was an SGI.  I imagine a 
> few readers of this BB may have never heard of "PROCESS", but it is listed as 
> the "intensity integration software" for at least a thousand PDB entries.  Is 
> there a reference for "PROCESS"?  Yes.  In the literature it is almost always 
> cited with: (Molecular Structure Corporation, The Woodlands, TX).  Do I still 
> have a copy of the manual?  Uhh.  No.  In fact, the building that once 
> contained it has since been torn down.  Good thing I kept my images!
>
> Is this "35% Rmerge with a 3-sigma cutoff" method of determining the 
> resolution limit statistically valid?  Yes!  There are actually very sound 
> statistical reasons for it.  Is the resolution cutoff obtained the best one 
> for maximum-likelihood refinement?  Definitely not!  Modern refinement 
> programs do benefit from weak data, and tossing it all out messes up a number 
> of things.  Does including weak data make Rmerge/Rmeas/Rpim and R/Rfree go 
> up?  Yes.  Does this make them more "honest"?  No.  It actually makes them 
> less useful.
>
> Remember, all R factors are measures of _relative_ error, so it is important 
> to ask the question: "Relative to what?".  For Rmerge, the "what" 
> is the sum of all the spot intensities (Blundell and Johnson, 1976).  Where 
> you run into problems is when you restrict the Rmerge calculation to a single 
> resolution bin.  If the sum of all intensities in the bin is actually zero, 
> then Rmerge is undefined (division by zero).  If the signal-to-noise ratio is 
> ~1, then the Rmerge equation doesn't "blow up" mathematically, but it does 
> give essentially random results.  This is because Rmerge values for data this 
> weak take on a Cauchy distribution, and no matter how much averaging you do, 
> the sample mean of Cauchy-distributed values never settles down (the Cauchy 
> distribution has no defined mean).  You can see in the classic 
> Weiss & Hilgenfeld (1997) paper that they had to use "outlier rejection" with 
> their fake data to get Rmerge to behave even with a signal-to-noise ratio of 
> 2.  The "turn over point" where the Rmerge equation becomes mathematically 
> well-behaved (Gaussian rather than Cauchy distribution) is when the 
> signal-to-noise ratio is about 3.  I believe this is why our forefathers used 
> a 3-sigma cutoff.
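>
> To make this concrete, here is a toy Monte Carlo sketch (hypothetical 
> Python/NumPy, not code from any data-processing package) of Rmerge computed 
> over a small, weak resolution bin:
>
>   import numpy as np
>
>   rng = np.random.default_rng(1)
>
>   def rmerge(i_obs):
>       # Rmerge = sum|I_i - <I>| / sum(I_i) over all observations in the bin.
>       i_mean = i_obs.mean(axis=1, keepdims=True)
>       return float(np.abs(i_obs - i_mean).sum() / i_obs.sum())
>
>   def bin_rmerge(snr, n_refl=10, n_obs=2, sigma=1.0):
>       # A weak "bin": every spot has true intensity snr*sigma, measured
>       # with Gaussian counting noise of width sigma (so it can go negative).
>       i_obs = snr * sigma + rng.normal(0.0, sigma, size=(n_refl, n_obs))
>       return rmerge(i_obs)
>
>   for snr in (0.5, 1, 2, 3):
>       r = np.array([bin_rmerge(snr) for _ in range(10000)])
>       print(f"I/sigma={snr}: median={np.median(r):.2f}, "
>             f"5th-95th = ({np.percentile(r, 5):.2f}, {np.percentile(r, 95):.2f})")
>
> As the signal-to-noise ratio drops toward 1 and below, the spread balloons 
> (the denominator wanders toward zero, so Rmerge occasionally explodes or even 
> comes out negative), while by a signal-to-noise ratio of about 3 the values 
> settle into a narrow, well-behaved range.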
>
> Now, a 3-sigma cutoff on the raw observation data may sound like heresy 
> today, and I do NOT recommend you feed such data to refinement or other 
> downstream programs.  But, it is important to remember what you are trying to 
> measure!  If you are trying to detect scaling errors, then you should be 
> looking at spots where scaling errors are not masked by other kinds of error. 
> For example, a spot with only one photon in it is not going to tell you very 
> much about the accuracy of your scales, but its average 
> |delta-intensity|/intensity is going to be huge.  That is, the pre-R-factor 
> sigma cutoff isolates the R factor calculation to spots dominated by scaling 
> errors.  Including weaker data with their Cauchy-distributed R factor simply 
> adds noise to the value of the R factor itself.
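>
> In code form, the idea is just a filter applied before the R-factor sum.  A 
> minimal sketch (hypothetical helper, continuing the NumPy toy model above; 
> real processing programs apply the cutoff in their own ways):
>
>   def rmerge_with_cutoff(i_obs, sig_obs, n_sigma=3.0):
>       # Keep only reflections whose merged intensity clears n_sigma times
>       # the (approximate) sigma of the merged mean; compute Rmerge over
>       # the survivors.  The cutoff applies to the statistic only --
>       # every observation still goes to merging and refinement.
>       i_mean = i_obs.mean(axis=1)
>       sig_mean = sig_obs.mean(axis=1) / np.sqrt(i_obs.shape[1])
>       strong = i_obs[i_mean >= n_sigma * sig_mean]
>       return float(np.abs(strong - strong.mean(axis=1, keepdims=True)).sum()
>                    / strong.sum())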
>
> So, I'd say if you have a reviewer complaining that your Rmerge in the 
> outermost bin is too high, simply tell the editor that you did not use a 
> 3-sigma cutoff on the raw data for the Rmerge calculation, and ask if he/she 
> would prefer that you did.
>
> -James Holton
> MAD Scientist
>
> On 1/27/2012 9:55 AM, Jacob Keller wrote:
>> Clarification: I did not mean I/sigma of 2 per se; I just meant that
>> I/sigma is more directly a measure of signal than R values are.
>> 
>> JPK
>> 
>> On Fri, Jan 27, 2012 at 11:47 AM, Jacob Keller
>> <[log in to unmask]>  wrote:
>>> Dear Crystallographers,
>>> 
>>> I cannot think why any of the various flavors of Rmerge/meas/pim
>>> should be used as a data cutoff and not simply I/sigma--can somebody
>>> make a good argument or point me to a good reference? My thinking is
>>> that signal:noise of >2 is definitely still signal, no matter what the
>>> R values are. Am I wrong? I was also thinking that the R-value cutoff
>>> was possibly a historical accident/expedient from when one tried to
>>> limit the amount of data in the face of limited computational
>>> power--true? So perhaps now, when computers are so much more
>>> powerful, we have the luxury of including more weak data?
>>> 
>>> JPK
>>> 
>>> 
>>> --
>>> *******************************************
>>> Jacob Pearson Keller
>>> Northwestern University
>>> Medical Scientist Training Program
>>> email:[log in to unmask]
>>> *******************************************
>
