Dear Pavel,
The Fcalc from reflections used during refinement are subject to
overfitting and therefore give poor estimates for ML parameters. The
reflections set aside for Rfree are ideally free from the effects of
overfitting (unless there are inter-reflection dependencies, as recently
discussed on ccp4bb), so using Fcalc(hkl_free) provides better ML
parameters. Our article shows that the Rcomplete method provides
Fcalc(hkl_complete) with the same beneficial properties as
Fcalc(hkl_free), not just for the test set but for every reflection.
With the Rcomplete method you can therefore use the entire data set for
estimating ML parameters.
Somebody mentioned to me, although he was not entirely sure, that ML
parameters are estimated from the correlation coefficients between Fobs
and Fcalc. With our method you get Fcalc for each unique reflection of
the data set. These Fcalc are as little affected by overfitting as the
'free' reflections. Therefore the Rcomplete method is especially
important for data sets with few reflections. There are many scenarios
in crystallography, listed in the introduction, with small data sets.
With fewer than, say, 10,000 reflections you may not want to set 2,000
reflections aside for ML-based refinement. Based on Rcomplete you can
use the entire data set for refinement and still benefit from ML
estimates free from overfitting.
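To make the correlation idea concrete, here is a toy sketch (my own illustration, not the actual sigma-a formula any program uses; the function name and the (d-spacing, Fobs, Fcalc) tuple layout are assumptions): it computes the Pearson correlation of |Fobs| against |Fcalc| in resolution bins, the kind of per-bin statistic from which ML parameters can be estimated.

```python
import math
from collections import defaultdict

def binned_correlation(refl, n_bins=4):
    """Pearson correlation of |Fobs| vs |Fcalc| per resolution bin.

    refl: list of (d_spacing, fobs, fcalc) tuples.
    Returns {bin_index: correlation coefficient}.
    """
    # Sort from low to high resolution and split into equally populated bins.
    refl = sorted(refl, key=lambda r: r[0], reverse=True)
    bins = defaultdict(list)
    for i, r in enumerate(refl):
        bins[i * n_bins // len(refl)].append(r)
    cc = {}
    for b, rs in bins.items():
        fo = [r[1] for r in rs]
        fc = [r[2] for r in rs]
        mo, mc = sum(fo) / len(fo), sum(fc) / len(fc)
        num = sum((o - mo) * (c - mc) for o, c in zip(fo, fc))
        den = math.sqrt(sum((o - mo) ** 2 for o in fo)
                        * sum((c - mc) ** 2 for c in fc))
        cc[b] = num / den if den else 0.0
    return cc
```

With Rcomplete every unique reflection contributes an overfitting-free Fcalc, so such statistics rest on the whole data set rather than on a small test set.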
Your second question: run a cycle of minimisation, calculate Rcomplete
to re-estimate the ML parameters from Fcalc for the entire data set,
then run the next cycle with the improved ML parameters.
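As a minimal sketch of that alternation (a toy stand-in, not any program's actual refinement: the "minimisation step" here is just a damped update of an overall scale factor k, and the function names are mine), each cycle does one refinement step and then recomputes the R value over the entire data set, which is where the ML parameters would be re-estimated:

```python
def r_factor(fobs, fcalc):
    """Crystallographic R = sum ||Fobs| - |Fcalc|| / sum |Fobs|."""
    return sum(abs(o - c) for o, c in zip(fobs, fcalc)) / sum(fobs)

def alternate(fobs, fmodel, n_cycles=10):
    """Toy alternation of refinement cycles and whole-data-set R values."""
    k = 1.0
    history = []
    for _ in range(n_cycles):
        # One 'minimisation' step: move k halfway to its least-squares optimum.
        k_opt = (sum(o * m for o, m in zip(fobs, fmodel))
                 / sum(m * m for m in fmodel))
        k += 0.5 * (k_opt - k)
        # Stand-in for Rcomplete: R over all reflections with current Fcalc,
        # the quantity from which ML parameters would be re-estimated.
        history.append(r_factor(fobs, [k * m for m in fmodel]))
    return k, history
```

The point of the loop is only the interleaving: refine, re-evaluate over the complete data set, refine again with the updated estimates.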
This sounds time-consuming, but bear in mind that Rcomplete addresses
those cases where there are too few reflections to rely on Rfree. The
fewer reflections, the faster each cycle.
Convergence: I considered a refinement converged when further cycles
would not introduce significant changes. With good models and good data,
SHELXL (which I used for this study) shows zero shifts for the refined
parameters after a few refinement cycles. High-resolution structures
show uncertainties in the range of 10^-4 for fractional coordinates and
10^-3 for U-values, so I looked at the values printed by SHELXL on the
command line after each cycle.
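That stopping rule can be written down as a small check (the thresholds below are the illustrative magnitudes from the paragraph above, not fixed constants from any program; the function name and argument layout are my own):

```python
def converged(shifts_xyz, shifts_u, tol_xyz=1e-4, tol_u=1e-3):
    """Declare convergence when the largest parameter shift of a cycle
    drops below the quoted uncertainty scales: ~1e-4 for fractional
    coordinates, ~1e-3 for U-values (thresholds are illustrative)."""
    return (max(abs(s) for s in shifts_xyz) < tol_xyz
            and max(abs(s) for s in shifts_u) < tol_u)
```

In practice you would feed it the per-cycle maximum shifts that the refinement program prints.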
With less well-defined structures there are often fluctuating atoms,
e.g. from optimistically placed water molecules or side chains. In such
cases I stopped refinement when I had the impression that the maximum
shifts were only oscillating back and forth (in a real refinement
project I would then consider removing such atoms).
Our results indicate that random parameter perturbation does not really
drive the reduction of the memory effect; it is the continued refinement
that does. This has the great advantage that you are talking about an
Rcomplete for one structure. When you perturb parameters randomly and
refine several times, I have observed many times that the resulting
structures differ quite a lot (see e.g. Fig. S4), so one cannot really
speak of _the_ structure.
Your last question: the values of Rcomplete and Rfree are pretty much
the same (except for very small test sets, below 5 reflections, where on
average Rfree goes up), so you can stick to what you are used to.
Best wishes,
Tim
On 07/14/2015 12:58 AM, Pavel Afonine wrote:
> Hi Tim,
>
> I glanced through the paper, thanks for pointing out! I have a few
> questions (I'm sorry in advance if this is addressed in the paper and I
> missed it!):
>
> - Specifically, which reflections do you suggest using to calculate the
> alpha and beta (or sigma-a) parameters for the ML function? Presently,
> test-set reflections are required for this.
>
> - In one of his papers Randy Read has demonstrated that 2mFo-DFc map (with
> m and D calculated using test reflections) is substantially less model
> biased than 2Fo-Fc map. So how we go about this within your formalism (this
> basically echoes my first question: what reflections to be used to
> calculate m and D?)?
>
> - In the Methods section you write "The model should have been refined
> against the entire dataset until convergence." 1) How is convergence
> determined? 2) What minimum does it need to be in, and does it matter?
>
> - What is a "rule of thumb" for Rcomplete values? I mean we kind of know
> that Rw/Rf ~20/25% is ok for a 2A resolution data set and ~30/35% is not
> ok... So what about Rcomplete?
>
> All the best,
> Pavel
>
> On Mon, Jul 13, 2015 at 1:57 PM, Tim Gruene <[log in to unmask]> wrote:
>
>> Dear Lu,
>>
>> when you only alter the ligand, the intensities between two structures
>> are probably quite similar. Hence when you only exchange the ligand and
>> choose a different set of reflections as Rfree, those free reflections
>> were previously used for refinement, i.e. Rfree might suffer from
>> model bias. It was believed that Rfree does not properly cross-validate
>> your structure in this case.
>>
>> However, Ian Tickle stated (probably more than once) that when you
>> remove a set of reflections from the data and refine until convergence,
>> those reflections set aside do not suffer from the memory effect, i.e.
>> they are 'freed' by refinement. I call this 'Tickle's conjecture' and we
>> investigated it with a set of experiments. According to our results
>> (http://www.pnas.org/content/early/2015/07/02/1502136112) Tickle's
>> conjecture holds true so that you can reassign any set as Rfree as long
>> as you refine to convergence (within numerical precision). Following our
>> results, if you only have a smallish data set, this publication
>> seconds Axel Brunger's recommendation to use all reflections for
>> refinement and calculate Rcomplete instead of Rfree. It is based on all
>> reflections and shows as little bias as Rfree.
>>
>> Cheers,
>> Tim
>>
>> On 07/13/2015 04:15 PM, luzuok wrote:
>>>
>>>
>>> Dear ccp4bb members,
>>>
>>>
>>> It's said that when choosing the Rfree set of a protein-ligand complex
>>> data set, it is better to use the same reflections as the native one
>>> (if available). Could anybody provide any detail or references about
>>> why we should do so?
>>>
>>> Best regards!
>>>
>>> Lu
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> 卢作焜
>>> 南开大学新生物站A202
>>>
>>>
>>> Lu Zuokun, Ph.D. Candidate
>>> College of Life Science, Nankai University
>>>
>>
>
--
Dr Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen
phone: +49 (0)551 39 22149
GPG Key ID = A46BEE1A