
CCP4BB Archives, June 2015

Subject: Re: How many is too many free reflections?
From: dusan turk <[log in to unmask]>
Date: Tue, 16 Jun 2015 15:07:02 +0200

Dear Axel and Paul,

Thank you for reopening the Rfree and TEST set discussion. The concepts of Rfree and the TEST set play an important role in crystallography. When you introduced them back in 1992, Rfree was the first systematic method of structure validation. Its advantage is that it uses only data from the structure being determined, in the absence of any other data sources. Nowadays, more than two decades later, we have learned a lot about structures. Real-space approaches, from density fit through deviations from ideal, statistically derived geometrical restraints, to packing information, together provide insight into structure correctness and, in my opinion, ensure correctness and guard against over-interpretation. Not to mention the agreement of the structure factors calculated from the model (Fmodel) with the measured data (Fobs), expressed by the R-factor (Rwork).

While the concept of the TEST set and its use in refinement provided a simple criterion for structure validation, it raises the following concerns:

- Refining structures against incomplete data results in structures that are off the true target. The reflections omitted from the WORK set introduce a bias of their absence, a direct consequence of the orthogonality of the Fourier series terms. This bias of absence is diminished by reducing the amount of data included in the TEST set, but it never disappears entirely. Over time the portion and size of the TEST set have indeed been reduced substantially. (A small toy illustration of this effect follows this list of concerns.)

- The identification of TEST reflections faces the problem of their independence when identical subunits in a structure are related by NCS, and I think that a substantial proportion of structures contains NCS. An interesting angle on the NCS issue is provided by the work of Silva & Rossmann (1985), who discarded most of the data almost proportionally to the level of NCS redundancy (using 1/7 of the data for the WORK set and 6/7 for the TEST set in the case of 10-fold NCS).

- An additional, so far almost neglected concern is the cross-propagation of systematic errors within structures. These errors are a consequence of interactions between structural parts through the chemical bonding and non-bonding energy terms used in refinement. Neglecting errors of this origin results in coordinate error estimates that are too small, and these estimates are essential for the Maximum Likelihood (ML) function.

- The TEST set was originally used in refinement together with the Least-Squares target. Apart from the bias of absence, the TEST data do not affect the Least-Squares target itself, whereas the standard ML function relies on these data for the estimation of its error parameters and is therefore biased by them.

- Rfree is an indicator of structure correctness and is monitored during refinement to assure its decrease; however, a different choice of TEST set will result in a different phase error and a different gap between Rfree and Rwork. The relationship between the Rfree-Rwork gap and the phase error across different TEST sets, calculated for our 5 test cases with 4 different TEST-set portions and 31 different TEST sets, was statistically significant in roughly half of the combinations and insignificant in the other half. When the relationship was statistically significant, the lower Rfree-Rwork gap quite often corresponded to the higher phase error. (This part of the analysis was not included in the paper; however, the negative correlation may be seen in the trend of the orange dots in several graphs of Figure 6.) Hence, there is no guarantee that the TEST set with the lowest gap between Rfree and Rwork will also deliver the structure with the lowest phase error, which is an underlying assumption of using Rfree for structure validation. This suggests that the gap between Rfree and Rwork can easily be manipulated without the manipulation being spotted: in the absence of a reference structure it is impossible to discover which choice of TEST set, with its corresponding Rfree-Rwork gap, delivers the structure with the lowest phase error. (This argument in a way supports Gerard's point that the TEST set should not be exchanged when various structures of the same crystal form of a molecule are determined using the Rfree methodology.) The "trick" of exchanging the TEST set is no surprise to the community, which uses it on occasions when a too large gap between Rfree and Rwork might lead to problems with a stubborn referee.
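
To make the bias of absence concrete, here is a purely illustrative toy calculation in Python/NumPy (not part of our paper or of any refinement program): a 1D "density" is synthesized from a full set of Fourier terms, a random fraction of the terms, standing in for a TEST set, is then omitted, and the relative change of the synthesis is reported. The distortion is present for any omitted fraction and grows with it.

import numpy as np

rng = np.random.default_rng(1)
n_terms, n_points = 500, 4096
x = np.linspace(0.0, 1.0, n_points, endpoint=False)

# random amplitudes and phases standing in for structure factors
amps = rng.random(n_terms)
phis = rng.uniform(0.0, 2.0 * np.pi, n_terms)
h = np.arange(1, n_terms + 1)

def synthesis(keep):
    # Fourier synthesis using only the terms flagged True in 'keep'
    a, ph, hh = amps[keep][:, None], phis[keep][:, None], h[keep][:, None]
    return (a * np.cos(2.0 * np.pi * hh * x[None, :] + ph)).sum(axis=0)

rho_full = synthesis(np.ones(n_terms, dtype=bool))

for test_fraction in (0.01, 0.05, 0.10):
    work = rng.random(n_terms) >= test_fraction   # omit a random "TEST set"
    change = np.sqrt(np.mean((rho_full - synthesis(work)) ** 2)) / rho_full.std()
    print(f"omitted {test_fraction:.0%} of terms: relative r.m.s. map change {change:.3f}")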

To overcome these concerns we developed the Maximum Likelihood Free Kick (ML FK) target function. As the cases used in the paper indicate, the ML FK target delivered more accurate structures and a narrower spread of solutions than today's standard Maximum Likelihood Cross-Validation (ML CV) function in all tested cases, including the 2AHN structure built in the wrong direction.

Our understanding is that the role of Rfree should be considered from the historical perspective. In our paper we wrote “Regarding the use of Rfree to prevent overfitting, we looked back in time to the circumstances in which Rfree was introduced into refinement in 1992 (Brunger, 1992). In 1993, Brunger wrote that ‘published crystal structures show a large distribution of deviations from ideal geometry’ and that ‘the Engh & Huber parameters allow one to fit the model with surprisingly small deviations from ideal geometry’ (Brunger, 1993). The work of Engh & Huber (1991) introduced targets for bond and angle parameters derived from the crystal structures of small molecules in the Cambridge Structural Database (Allen et al., 1987). Nowadays, statistically derived parameters are routinely used in refinement. Moreover, noting the problem of structural quality, numerous validation tools have been developed and have become an unavoidable part of structure determination and deposition. In refinement the practice has been established that the deviations from ideal geometry are defined as a target used to scale crystallographic energy terms. Hence, the overfitting of models which leads to severe deviations from ideal geometry is no longer really possible.”

Regarding the part of our text that you use as an argument to support your view, it appears that you have taken it out of context. The quoted text continues as follows: "However, using the ML FK approach the size of the test set does not matter. It can be as small as 1% of the data or likely even less and the message about a fundamental problem with the structure solution will still be provided. Once it has been established that the structure solution is correct, the test part of the data can be merged with the work part to deliver a structure of higher accuracy. We wish to add that an experienced crystallographer would realise that the structure was built in the wrong direction owing to numerous mismatches of the model and the electron-density maps and inconsistency of the three-dimensional fold with the sequence, and that other validation warnings were also disregarded."

Therefore I think that the conclusion from our paper still stands:

“To conclude, our understanding is that in the early 1990s in the absence of rigorous geometric restraints structure validation was first introduced in reciprocal space with Rfree. Nowadays, however, overfitting can be controlled in real space by the rigorous use of geometric restraints and validation tools. ... Since the ML FK approach allows the use of all data in refinement with a gain in structure accuracy and thereby delivers lower model bias, this work encourages the use of all data in the refinement of macromolecular structures.”

We believe that the Free Kick ML target delivered progress in refinement and, as mentioned in the final paragraph of our paper, "we anticipate further improvements and simplifications in the future". As for validation of refinement against all data, we believe that Rkick could be used instead of Rfree: Rkick is the R-factor of the kicked model that is used for the calculation of the phase and coordinate error estimates.
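
As a purely illustrative sketch of what an Rkick-style number is (this is not the implementation in MAIN or in our paper; it assumes point-atom structure factors and an arbitrary kick amplitude), one could compute it along the following lines in Python/NumPy:

import numpy as np

rng = np.random.default_rng(0)

def structure_factors(coords, hkl):
    # point-atom structure factors F(h) = sum_j exp(2*pi*i * h.x_j)
    return np.exp(2j * np.pi * (hkl @ coords.T)).sum(axis=1)

def r_factor(f_obs, f_calc):
    # linear R-factor with a single least-squares scale factor k
    f_calc = np.abs(f_calc)
    k = np.sum(f_obs * f_calc) / np.sum(f_calc ** 2)
    return np.sum(np.abs(f_obs - k * f_calc)) / np.sum(f_obs)

# hypothetical "true" structure and its observed amplitudes (toy data)
coords_true = rng.random((50, 3))                       # fractional coordinates
hkl = rng.integers(-8, 9, size=(2000, 3)).astype(float)
f_obs = np.abs(structure_factors(coords_true, hkl))

# kick the model with an arbitrary r.m.s. displacement (fractional units)
coords_kick = coords_true + rng.normal(0.0, 0.01, coords_true.shape)

print("R of the unkicked model:", r_factor(f_obs, structure_factors(coords_true, hkl)))
print("Rkick of the kicked model:", r_factor(f_obs, structure_factors(coords_kick, hkl)))

In a real refinement the kick amplitude and the averaging over many kicked models matter, and proper atomic form factors would be used; the sketch only shows the bookkeeping behind "the R-factor of the kicked model".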

best regards,

dusan and jure




> On Jun 10, 2015, at 10:31 AM, Axel Brunger <[log in to unmask]> wrote:
>
> Dear Dusan,
>
> Following up on Gerard's comment, we also read your nice paper with great interest. Your method appears most useful for cases with a limited number of reflections (e.g., small unit cell and/or low resolution) resulting in 5% test sets with less than 1000 reflections in total. It improves the performance of your implementation of ML refinement for the cases that you described. However, we don't think that you can conclude that cross-validation is not needed anymore. To quote your paper, in the Discussion section:
>
> "To address the use of R free as indicator of wrong structures, we repeated the Kleywegt and Jones experiment (Kleywegt & Jones, 1995; Kleywegt & Jones, 1997) and built the 2ahn structure in the reverse direction and then refined it in the absence of solvent using the ML CV and ML FK approaches. Fig. 9 shows that Rfree stayed around 50% and Rfree–Rwork around 15% in the case of the reverse structure regardless of the ML approach and the fraction of data used in the test set. These values indicate that there is a fundamental problem with the structure, which supports the further use of Rfree as an indicator."
>
> Thank you for reaffirming the utility of the statistical tool of cross-validation. The reverse chain trace of 2ahn is admittedly an extreme case of misfitting, and would probably be detected with other validation tools as well these days. However, the danger of overfitting or misfitting is still a very real possibility for large structures, especially when only moderate to low resolution data are available, even with today's tools.
>
> Cross-validation can help even at very low resolution: in Structure 20, 957-966 (2012) we showed that cross-validation is useful for certain low resolution refinements where additional restraints (DEN restraints in that case) are used to reduce overfitting and obtain a more accurate structure. Cross-validation made it possible to detect overfitting of the data when no DEN restraints were used. We believe this should also apply when other types of restraints are used (e.g., reference model restraints in phenix.refine, REFMAC, or BUSTER).
>
> In summary, we believe that cross-validation remains an important (and conceptually simple) method to detect overfitting and for overall structure validation.
>
> Axel
>
> Axel T. Brunger
> Professor and Chair, Department of Molecular and Cellular Physiology
> Investigator, HHMI
> Email: [log in to unmask]
> Phone: 650-736-1031
> Web: http://atbweb.stanford.edu
>
> Paul
>
> Paul Adams
> Deputy Division Director, Physical Biosciences Division, Lawrence Berkeley Lab
> Division Deputy for Biosciences, Advanced Light Source, Lawrence Berkeley Lab
> Adjunct Professor, Department of Bioengineering, U.C. Berkeley
> Vice President for Technology, the Joint BioEnergy Institute
> Laboratory Research Manager, ENIGMA Science Focus Area
>
> Tel: 1-510-486-4225, Fax: 1-510-486-5909
>
> http://cci.lbl.gov/paul
>> On Jun 5, 2015, at 2:18 AM, Gerard Bricogne <[log in to unmask]> wrote:
>>
>> Dear Dusan,
>>
>> This is a nice paper and an interestingly different approach to
>> avoiding bias and/or quantifying errors - and indeed there are all
>> kinds of possibilities if you have a particular structure on which you
>> are prepared to spend unlimited time and resources.
>>
>> The specific context in which Graeme's initial question led me to
>> query instead "who should set the FreeR flags, at what stage and on
>> what basis?" was that of the data analysis linked to high-throughput
>> fragment screening, in which speed is of the essence at every step.
>>
>> Creating FreeR flags afresh for each target-fragment complex
>> dataset without any reference to those used in the refinement of the
>> apo structure is by no means an irrecoverable error, but it will take
>> extra computing time to let the refinement of the complex adjust to a
>> new free set, starting from a model refined with the ignored one. It
>> is in order to avoid the need for that extra time, or for a recourse
>> to various debiasing methods, that the "book-keeping faff" described
>> yesterday has been introduced. Operating without it is perfectly
>> feasible, it is just likely to not be optimally direct.
>>
>> I will probably bow out here, before someone asks "How many
>> [e-mails from me] is too many?" :-) .
>>
>>
>> With best wishes,
>>
>> Gerard.
>>
>> --
>> On Fri, Jun 05, 2015 at 09:14:18AM +0200, dusan turk wrote:
>>> Graeme,
>>> one more suggestion. You can avoid all the recipes by using all data for the WORK set and 0 reflections for the TEST set, regardless of the amount of data, with the FREE KICK ML target. For an explanation see our recent paper Praznikar, J. & Turk, D. (2014) Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures. Acta Cryst. D70, 3124-3134.
>>>
>>> A link to the paper can be found at "http://www-bmb.ijs.si/doc/references.HTML"
>>>
>>> best,
>>> dusan
>>>
>>>
>>>
>>>> On Jun 5, 2015, at 1:03 AM, CCP4BB automatic digest system <[log in to unmask]> wrote:
>>>>
>>>> Date: Thu, 4 Jun 2015 08:30:57 +0000
>>>> From: Graeme Winter <[log in to unmask]>
>>>> Subject: Re: How many is too many free reflections?
>>>>
>>>> Hi Folks,
>>>>
>>>> Many thanks for all of your comments - in keeping with the spirit of the BB
>>>> I have digested the responses below. Interestingly I suspect that the
>>>> responses to this question indicate the very wide range of resolution
>>>> limits of the data people work with!
>>>>
>>>> Best wishes Graeme
>>>>
>>>> ===================================
>>>>
>>>> Proposal 1:
>>>>
>>>> 10% reflections, max 2000
>>>>
>>>> Proposal 2: from wiki:
>>>>
>>>> http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/Test_set
>>>>
>>>> including Randy Read "recipe":
>>>>
>>>> So here's the recipe I would use, for what it's worth:
>>>> <10000 reflections: set aside 10%
>>>> 10000-20000 reflections: set aside 1000 reflections
>>>> 20000-40000 reflections: set aside 5%
>>>> >40000 reflections: set aside 2000 reflections
>>>>
>>>> Proposal 3:
>>>>
>>>> 5% maximum 2-5k
>>>>
>>>> Proposal 4:
>>>>
>>>> 3% minimum 1000
>>>>
>>>> Proposal 5:
>>>>
>>>> 5-10% of reflections, minimum 1000
>>>>
>>>> Proposal 6:
>>>>
>>>> >50 reflections per "bin" in order to get reliable ML parameter
>>>> estimation, ideally around 150 / bin.
>>>>
>>>> Proposal 7:
>>>>
>>>> If lots of reflections (i.e. 800K unique) around 1% selected - 5% would be
>>>> 40k i.e. rather a lot. Referees question use of > 5k reflections as test
>>>> set.
>>>>
>>>> Comment 1 in response to this:
>>>>
>>>> Surely absolute # of test reflections is not relevant, percentage is.
>>>>
>>>> ============================
>>>>
>>>> Approximate consensus (i.e. what I will look at doing in xia2) - probably
>>>> follow Randy Read recipe from ccp4wiki as this seems to (probably) satisfy
>>>> most of the criteria raised by everyone else.
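
For what it is worth, a minimal Python transcription of the Randy Read recipe quoted above could look like the following (my reading of it; how counts falling exactly on the boundaries 10000, 20000 and 40000 should be treated is a guess):

def n_free_reflections(n_total):
    # suggested TEST-set size for a given number of unique reflections
    if n_total < 10000:
        return round(0.10 * n_total)   # <10000: set aside 10%
    if n_total < 20000:
        return 1000                    # 10000-20000: 1000 reflections
    if n_total < 40000:
        return round(0.05 * n_total)   # 20000-40000: 5%
    return 2000                        # >40000: 2000 reflections

for n in (5000, 15000, 30000, 800000):
    print(n, "unique reflections ->", n_free_reflections(n), "free")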
>>>>
>>>>
>>>>
>>>> On Tue, Jun 2, 2015 at 11:26 AM Graeme Winter <[log in to unmask]>
>>>> wrote:
>>>>
>>>>> Hi Folks
>>>>>
>>>>> Had a vague comment handed my way that "xia2 assigns too many free
>>>>> reflections" - I have a feeling that by default it makes a free set of 5%
>>>>> which was OK back in the day (like I/sig(I) = 2 was OK) but maybe seems
>>>>> excessive now.
>>>>>
>>>>> This was particularly in the case of high resolution data where you have a
>>>>> lot of reflections, so 5% could be several thousand which would be more
>>>>> than you need to just check Rfree seems OK.
>>>>>
>>>>> Since I really don't know what is the right # reflections to assign to a
>>>>> free set thought I would ask here - what do you think? Essentially I need
>>>>> to assign a minimum %age or minimum # - the lower of the two presumably?
>>>>>
>>>>> Any comments welcome!
>>>>>
>>>>> Thanks & best wishes Graeme
>>>>>
>>>>
>>>
>>> Dr. Dusan Turk, Prof.
>>> Head of Structural Biology Group http://bio.ijs.si/sbl/
>>> Head of Centre for Protein and Structure Production
>>> Centre of excellence for Integrated Approaches in Chemistry and Biology of Proteins, Scientific Director
>>> http://www.cipkebip.org/
>>> Professor of Structural Biology at IPS "Jozef Stefan"
>>> e-mail: [log in to unmask]
>>> phone: +386 1 477 3857
>>> fax: +386 1 477 3984
>>> Dept. of Biochem. & Mol. & Struct. Biol., Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
>>> Skype: dusan.turk (voice over internet: www.skype.com)
>

Dr. Dusan Turk, Prof.
Head of Structural Biology Group http://bio.ijs.si/sbl/
Head of Centre for Protein and Structure Production
Centre of excellence for Integrated Approaches in Chemistry and Biology of Proteins, Scientific Director
http://www.cipkebip.org/
Professor of Structural Biology at IPS "Jozef Stefan"
e-mail: [log in to unmask]
phone: +386 1 477 3857
fax: +386 1 477 3984
Dept. of Biochem. & Mol. & Struct. Biol., Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
Skype: dusan.turk (voice over internet: www.skype.com)
