Dear Axel and Paul,

Thank you for reopening the Rfree and TEST set discussion. The concepts of Rfree and the TEST set play an important role in crystallography. When you introduced them back in 1992, Rfree was the first systematic method of structure validation. Its advantage is that it uses only data from the structure being determined, in the absence of any other data sources. Nowadays, more than two decades later, we have learned a lot about structures. Real-space approaches, from density fit, through deviations from ideal geometry and statistically derived geometrical restraints, to packing information, together provide insight into structure correctness and, in my opinion, guard against over-interpretation. Not to mention the fit of the model structure factors (Fmodel) to the measured data (Fobs), expressed by the R-factor (Rwork).
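For readers following the thread, both statistics are the standard crystallographic residual (written here in LaTeX notation)

    R = \frac{ \sum_h \left| |F_{obs}(h)| - k |F_{model}(h)| \right| }{ \sum_h |F_{obs}(h)| }

with Rwork summed over the WORK reflections used in refinement and Rfree summed over the TEST reflections excluded from it.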

While the concept of the TEST set and its use in refinement provided a simple criterion for structure validation, it raises the following concerns:

- Refining structures against incomplete data results in structures that are off the true target: the reflections omitted from the WORK set introduce a bias through their absence. This bias is a direct consequence of the orthogonality of the Fourier series terms (stated formally below this list). The bias of absence is diminished by reducing the amount of data included in the TEST set, but it remains as long as a TEST set is present. Over time, the fraction and size of the TEST set have indeed been reduced substantially.

- The selection of TEST reflections faces the problem of their independence when identical subunits in a structure are related by NCS, and I think that a substantial proportion of structures contain NCS (a sketch of one common mitigation follows below this list). An interesting angle on the NCS issue is provided by the work of Silva & Rossmann (1985), who discarded most of the data almost proportionally to the level of NCS redundancy (using 1/7th of the data for the WORK set and 6/7 for the TEST set in the case of 10-fold NCS).

- An additional, so far almost neglected concern is the cross-propagation of systematic errors within structures. These errors are a consequence of the interaction of structural parts through the chemical bonding and non-bonding energy terms used in refinement. Neglecting errors of this origin results in coordinate error estimates that are too small, and these estimates are essential inputs to the Maximum Likelihood (ML) function.

- The TEST set was originally used in refinement with the Least Squares target. Apart from the bias of absence, withholding the TEST reflections does not affect the Least Squares target itself, whereas the standard ML function relies on these data for its error estimates and is therefore biased by them (the two targets are contrasted schematically below this list).

- Rfree is an indicator of structure correctness and is monitored during refinement to ensure that it decreases; however, a different choice of TEST set will result in a different phase error and a different gap between Rfree and Rwork. The relationship between the Rfree-Rwork gap and the phase error across different TEST sets, calculated for our 5 test cases with 4 different TEST-set portions and 31 different TEST sets each, is in some cases statistically significant and in others not, the two groups containing approximately equal numbers of members. When the relationship was statistically significant, the lower Rfree-Rwork gap quite often delivered the higher phase error (an illustrative sketch of this correlation analysis follows below this list). (This part of the analysis was not included in the paper; however, the negative correlation may be seen in the trend of the orange dots in several graphs of Figure 6.) Hence, there is no guarantee that the TEST set with the lowest gap between Rfree and Rwork will also deliver the structure with the lowest phase error, which is an underlying assumption of the use of Rfree for structure validation. This suggests that the gap between Rfree and Rwork can be easily manipulated and the manipulation not spotted: in the absence of a reference structure it is impossible to discover which choice of TEST set, with its corresponding gap between Rfree and Rwork, delivers the structure with the lowest phase error. (This argument in a way supports Gerard's point that the TEST set should not be exchanged when various structures of the same crystal form of a molecule are being determined using the Rfree methodology.) The “trick” of exchanging the TEST set is no surprise to the part of the community which uses it on occasions when they suspect that a too large gap between Rfree and Rwork may lead to problems with a stubborn referee.
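To state the bias of absence from the first point more formally: the electron density is the Fourier synthesis

    \rho(x) = \frac{1}{V} \sum_h F(h) \, e^{-2 \pi i h \cdot x}

Because the terms of the series are mutually orthogonal, dropping the TEST reflections from the synthesis removes their contribution exactly; no combination of the remaining WORK terms can compensate for it. The model refined against the WORK set alone is therefore displaced from the all-data optimum by an amount that shrinks with the TEST-set size but never vanishes.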
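On the NCS point: one mitigation used in the field is to assign whole thin resolution shells to the TEST set, since NCS relates reflections lying at nearly the same resolution, so related reflections end up on the same side of the split. A minimal sketch in Python (the function name and defaults are illustrative, not taken from any particular package):

    import numpy as np

    def thin_shell_test_flags(d_spacings, n_shells=100, test_fraction=0.05, seed=0):
        # Rank reflections by 1/d^2 and cut them into thin shells of equal population.
        rng = np.random.default_rng(seed)
        d = np.asarray(d_spacings, dtype=float)
        n = len(d)
        order = np.argsort(1.0 / d ** 2)
        shell_id = np.empty(n, dtype=int)
        shell_id[order] = np.arange(n) * n_shells // n
        # Assign whole shells to the TEST set until the requested fraction is reached,
        # so NCS-related reflections stay together in either WORK or TEST.
        n_test_shells = max(1, int(round(n_shells * test_fraction)))
        test_shells = rng.choice(n_shells, size=n_test_shells, replace=False)
        return np.isin(shell_id, test_shells)  # True marks TEST reflections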
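On the Least Squares versus ML point, the two targets contrast schematically as

    E_{LS} = \sum_{h \in WORK} w_h \left( |F_{obs}(h)| - k |F_{calc}(h)| \right)^2

in which the TEST reflections simply never appear, versus

    E_{ML} = - \sum_{h \in WORK} \log P\left( |F_{obs}(h)| ; \sigma_A(h) |F_{calc}(h)| \right)

in which the error parameters \sigma_A are, in the cross-validated (ML CV) scheme, estimated from the TEST reflections, so the TEST data feed back into the target applied to the WORK data.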
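To illustrate the last point, the analysis amounts to correlating the Rfree-Rwork gap with the phase error over alternative TEST-set choices. A toy sketch in Python; the numbers below are invented purely to show the shape of the test and are not results from our paper:

    import numpy as np
    from scipy.stats import pearsonr

    def gap_vs_phase_error(gap, phase_error, alpha=0.05):
        # One value per TEST-set choice: gap[i] = Rfree - Rwork, phase_error[i] = mean
        # phase error versus the reference structure. A significantly negative r means
        # that the smaller gap tends to come with the LARGER phase error.
        r, p = pearsonr(gap, phase_error)
        return r, p, p < alpha

    # Synthetic demonstration for 31 TEST sets (numbers invented for illustration only).
    rng = np.random.default_rng(1)
    gap = rng.normal(0.05, 0.01, size=31)
    phase_error = 60.0 - 100.0 * gap + rng.normal(0.0, 0.5, size=31)
    print(gap_vs_phase_error(gap, phase_error))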

To overcome these concerns we developed the Maximum Likelihood Free Kick (ML FK) target function. As the cases used in the paper indicate, the ML FK target function delivered more accurate structures and narrower distributions of solutions than today's standard Maximum Likelihood Cross-Validation (ML CV) function in all tested cases, including the case of the 2AHN structure built in the wrong direction.
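For readers who have not seen the paper, the kick itself is simple: each atom is displaced in a random direction by a prescribed r.m.s. amount, and the error estimates required by the ML function are derived from an ensemble of such kicked models rather than from a TEST set. A minimal sketch in Python (the coordinate array and parameter values are placeholders):

    import numpy as np

    def kick_coordinates(xyz, kick_rms=0.3, seed=None):
        # Displace every atom at random so that the r.m.s. displacement per atom
        # is kick_rms (Angstrom); each Cartesian component gets sigma = kick_rms/sqrt(3).
        rng = np.random.default_rng(seed)
        kicks = rng.normal(0.0, kick_rms / np.sqrt(3.0), size=xyz.shape)
        return xyz + kicks

    # An ensemble of kicked models derived from one refined model (placeholder coordinates).
    xyz = np.zeros((1000, 3))
    ensemble = [kick_coordinates(xyz, kick_rms=0.3, seed=i) for i in range(10)]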

Our understanding is that the role of Rfree should be considered from a historical perspective. In our paper we wrote “Regarding the use of Rfree to prevent overfitting, we looked back in time to the circumstances in which Rfree was introduced into refinement in 1992 (Brunger, 1992). In 1993, Brunger wrote that ‘published crystal structures show a large distribution of deviations from ideal geometry’ and that ‘the Engh & Huber parameters allow one to fit the model with surprisingly small deviations from ideal geometry’ (Brunger, 1993). The work of Engh & Huber (1991) introduced targets for bond and angle parameters derived from the crystal structures of small molecules in the Cambridge Structural Database (Allen et al., 1987). Nowadays, statistically derived parameters are routinely used in refinement. Moreover, noting the problem of structural quality, numerous validation tools have been developed and have become an unavoidable part of structure determination and deposition. In refinement the practice has been established that the deviations from ideal geometry are defined as a target used to scale crystallographic energy terms. Hence, the overfitting of models which leads to severe deviations from ideal geometry is no longer really possible.”

Regarding the part of our text that you use as an argument to support your view, it appears that you have taken it out of context. The quoted text continues as follows: “However, using the ML FK approach the size of the test set does not matter. It can be as small as 1% of the data or likely even less and the message about a fundamental problem with the structure solution will still be provided. Once it has been established that the structure solution is correct, the test part of the data can be merged with the work part to deliver a structure of higher accuracy. We wish to add that an experienced crystallographer would realise that the structure was built in the wrong direction owing to numerous mismatches of the model and the electron-density maps and inconsistency of the three-dimensional fold with the sequence, and that other validation warnings were also disregarded.”

Therefore I think that the conclusion from our paper still stands:

“To conclude, our understanding is that in the early 1990s in the absence of rigorous geometric restraints structure validation was first introduced in reciprocal space with Rfree. Nowadays, however, overfitting can be controlled in real space by the rigorous use of geometric restraints and validation tools. ... Since the ML FK approach allows the use of all data in refinement with a gain in structure accuracy and thereby delivers lower model bias, this work encourages the use of all data in the refinement of macromolecular structures.”

We believe that the Free Kick ML target has delivered progress in refinement and, as mentioned in the final paragraph of our paper, "we anticipate further improvements and simplifications in the future". As for validation of refinement against all data, we believe that Rkick could be used instead of Rfree. Rkick is the R-factor of the kicked model that is used for the calculation of the phase and coordinate error estimates.
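In the notation of the residual given above, this would read

    R_{kick} = \frac{ \sum_h \left| |F_{obs}(h)| - k |F_{kick}(h)| \right| }{ \sum_h |F_{obs}(h)| }

where F_{kick} are structure factors calculated from the kicked model and the sum runs over all reflections, since all data are used in refinement.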

best regards,

dusan and jure




> On Jun 10, 2015, at 10:31 AM, Axel Brunger <[log in to unmask]> wrote:
> 
> Dear Dusan,
> 
> Following up on Gerard's comment, we also read your nice paper with great interest. Your method appears most useful for cases with a limited number of reflections (e.g., small unit cell and/or low resolution) resulting in 5% test sets with less than 1000 reflections in total. It improves the performance of your implementation of ML refinement for the cases that you described. However, we don't think that you can conclude that cross-validation is not needed anymore. To quote your paper, in the Discussion section: 
> 
> "To address the use of R free as indicator of wrong structures, we repeated the Kleywegt and Jones experiment (Kleywegt & Jones, 1995; Kleywegt & Jones, 1997) and built the 2ahn structure in the reverse direction and then refined it in the absence of solvent using the ML CV and ML FK approaches. Fig. 9 shows that Rfree stayed around 50% and Rfree–Rwork around 15% in the case of the reverse structure regardless of the ML approach and the fraction of data used in the test set. These values indicate that there is a fundamental problem with the structure, which supports the further use of Rfree as an indicator."
> 
> Thank you for reaffirming the utility of the statistical tool of cross-validation. The reverse chain trace of 2ahn is admittedly an extreme case of misfitting, and would probably be detected with other validation tools as well these days. However, the danger of overfitting or misfitting is still a very real possibility for large structures, especially when only moderate to low resolution data are available, even with today's tools.
> 
> Cross-validation can help even at very low resolution: in Structure 20, 957-966 (2012) we showed that cross-validation is useful for certain low resolution refinements where additional restraints (DEN restraints in that case) are used to reduce overfitting and obtain a more accurate structure. Cross-validation made it possible to detect overfitting of the data when no DEN restraints were used. We believe this should also apply when other types of restraints are used (e.g., reference model restraints in phenix.refine, REFMAC, or BUSTER).  
> 
> In summary, we believe that cross-validation remains an important (and conceptually simple) method to detect overfitting and for overall structure validation.
> 
> Axel
> 
> Axel T. Brunger
> Professor and Chair, Department of Molecular and Cellular Physiology
> Investigator, HHMI
> Email: [log in to unmask]
> Phone: 650-736-1031
> Web: http://atbweb.stanford.edu
> 
> Paul
> 
> Paul Adams
> Deputy Division Director, Physical Biosciences Division, Lawrence Berkeley Lab
> Division Deputy for Biosciences, Advanced Light Source, Lawrence Berkeley Lab
> Adjunct Professor, Department of Bioengineering, U.C. Berkeley
> Vice President for Technology, the Joint BioEnergy Institute
> Laboratory Research Manager, ENIGMA Science Focus Area
> 
> Tel: 1-510-486-4225, Fax: 1-510-486-5909
> 
> http://cci.lbl.gov/paul
>> On Jun 5, 2015, at 2:18 AM, Gerard Bricogne <[log in to unmask]> wrote:
>> 
>> Dear Dusan,
>> 
>>     This is a nice paper and an interestingly different approach to
>> avoiding bias and/or quantifying errors - and indeed there are all
>> kinds of possibilities if you have a particular structure on which you
>> are prepared to spend unlimited time and resources.
>> 
>>     The specific context in which Graeme's initial question led me to
>> query instead "who should set the FreeR flags, at what stage and on
>> what basis?" was that of the data analysis linked to high-throughput
>> fragment screening, in which speed is of the essence at every step. 
>> 
>>     Creating FreeR flags afresh for each target-fragment complex
>> dataset without any reference to those used in the refinement of the
>> apo structure is by no means an irrecoverable error, but it will take
>> extra computing time to let the refinement of the complex adjust to a
>> new free set, starting from a model refined with the ignored one. It
>> is in order to avoid the need for that extra time, or for a recourse
>> to various debiasing methods, that the "book-keeping faff" described
>> yesterday has been introduced. Operating without it is perfectly
>> feasible, it is just likely to not be optimally direct.
>> 
>>     I will probably bow out here, before someone asks "How many
>> [e-mails from me] is too many?" :-) .
>> 
>> 
>>     With best wishes,
>> 
>>          Gerard.
>> 
>> --
>> On Fri, Jun 05, 2015 at 09:14:18AM +0200, dusan turk wrote:
>>> Graeme,
>>> one more suggestion. You can avoid all the recipes by using all data for the WORK set and zero reflections for the TEST set, regardless of the amount of data, with the FREE KICK ML target. For an explanation see our recent paper: Praznikar, J. & Turk, D. (2014). Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures. Acta Cryst. D70, 3124-3134.
>>> 
>>> You can find a link to the paper at “http://www-bmb.ijs.si/doc/references.HTML”
>>> 
>>> best,
>>> dusan
>>> 
>>> 
>>> 
>>>> On Jun 5, 2015, at 1:03 AM, CCP4BB automatic digest system <[log in to unmask]> wrote:
>>>> 
>>>> Date:    Thu, 4 Jun 2015 08:30:57 +0000
>>>> From:    Graeme Winter <[log in to unmask]>
>>>> Subject: Re: How many is too many free reflections?
>>>> 
>>>> Hi Folks,
>>>> 
>>>> Many thanks for all of your comments - in keeping with the spirit of the BB
>>>> I have digested the responses below. Interestingly I suspect that the
>>>> responses to this question indicate the very wide range of resolution
>>>> limits of the data people work with!
>>>> 
>>>> Best wishes Graeme
>>>> 
>>>> ===================================
>>>> 
>>>> Proposal 1:
>>>> 
>>>> 10% reflections, max 2000
>>>> 
>>>> Proposal 2: from wiki:
>>>> 
>>>> http://strucbio.biologie.uni-konstanz.de/ccp4wiki/index.php/Test_set
>>>> 
>>>> including Randy Read "recipe":
>>>> 
>>>> So here's the recipe I would use, for what it's worth:
>>>> <10000 reflections:        set aside 10%
>>>>  10000-20000 reflections:  set aside 1000 reflections
>>>>  20000-40000 reflections:  set aside 5%
>>>> >40000 reflections:       set aside 2000 reflections
>>>> 
>>>> Proposal 3:
>>>> 
>>>> 5% maximum 2-5k
>>>> 
>>>> Proposal 4:
>>>> 
>>>> 3% minimum 1000
>>>> 
>>>> Proposal 5:
>>>> 
>>>> 5-10% of reflections, minimum 1000
>>>> 
>>>> Proposal 6:
>>>> 
>>>> >50 reflections per "bin" in order to get reliable ML parameter
>>>> estimation, ideally around 150 / bin.
>>>> 
>>>> Proposal 7:
>>>> 
>>>> If lots of reflections (i.e. 800K unique) around 1% selected - 5% would be
>>>> 40k i.e. rather a lot. Referees question use of > 5k reflections as test
>>>> set.
>>>> 
>>>> Comment 1 in response to this:
>>>> 
>>>> Surely absolute # of test reflections is not relevant, percentage is.
>>>> 
>>>> ============================
>>>> 
>>>> Approximate consensus (i.e. what I will look at doing in xia2) - probably
>>>> follow Randy Read recipe from ccp4wiki as this seems to (probably) satisfy
>>>> most of the criteria raised by everyone else.
>>>> 
>>>> 
>>>> 
>>>> On Tue, Jun 2, 2015 at 11:26 AM Graeme Winter <[log in to unmask]>
>>>> wrote:
>>>> 
>>>>> Hi Folks
>>>>> 
>>>>> Had a vague comment handed my way that "xia2 assigns too many free
>>>>> reflections" - I have a feeling that by default it makes a free set of 5%
>>>>> which was OK back in the day (like I/sig(I) = 2 was OK) but maybe seems
>>>>> excessive now.
>>>>> 
>>>>> This was particularly in the case of high resolution data where you have a
>>>>> lot of reflections, so 5% could be several thousand which would be more
>>>>> than you need to just check Rfree seems OK.
>>>>> 
>>>>> Since I really don't know what is the right # reflections to assign to a
>>>>> free set thought I would ask here - what do you think? Essentially I need
>>>>> to assign a minimum %age or minimum # - the lower of the two presumably?
>>>>> 
>>>>> Any comments welcome!
>>>>> 
>>>>> Thanks & best wishes Graeme
>>>>> 
>>>> 
>>> 
>>> Dr. Dusan Turk, Prof.
>>> Head of Structural Biology Group http://bio.ijs.si/sbl/ 
>>> Head of Centre for Protein  and Structure Production
>>> Centre of excellence for Integrated Approaches in Chemistry and Biology of Proteins, Scientific Director
>>> http://www.cipkebip.org/
>>> Professor of Structural Biology at IPS "Jozef Stefan"
>>> e-mail: [log in to unmask]    
>>> phone: +386 1 477 3857       Dept. of Biochem.& Mol.& Struct. Biol.
>>> fax:   +386 1 477 3984       Jozef Stefan Institute
>>>                            Jamova 39, 1 000 Ljubljana,Slovenia
>>> Skype: dusan.turk (voice over internet: www.skype.com
> 

Dr. Dusan Turk, Prof.
Head of Structural Biology Group http://bio.ijs.si/sbl/ 
Head of Centre for Protein  and Structure Production
Centre of excellence for Integrated Approaches in Chemistry and Biology of Proteins, Scientific Director
http://www.cipkebip.org/
Professor of Structural Biology at IPS "Jozef Stefan"
e-mail: [log in to unmask]    
phone: +386 1 477 3857       Dept. of Biochem.& Mol.& Struct. Biol.
fax:   +386 1 477 3984       Jozef Stefan Institute
                            Jamova 39, 1 000 Ljubljana,Slovenia
Skype: dusan.turk (voice over internet: www.skype.com)