Hi Lijun
One important point your summary didn't cover - the test set may not
actually be the same even though you think it is! What I mean is that
the test sets for different crystals may have the same indices but may
not sample exactly the same points in reciprocal space, whether due to
cell parameter changes, rigid-body movements, or in fact anything which
causes the crystals to be non-isomorphous. Mike's original question, I
think, referred to switching datasets when the crystals are chemically
identical, but I suspect this problem arises more often when a ligand
is soaked in and you want to re-refine the complex against the new data
using the refined apo model as a starting point - the question is
whether it matters if you use the 'same' test set (i.e. the same
indices).
We frequently observe quite large cell parameter changes (up to 10% in
extreme cases) on soaking and freezing (which are hardly reproducible
treatments of the crystals!). Let's say a cell parameter has changed by
only 2%, which is fairly typical for many ligand soaks. The question
is: at what resolution does this cause a test set reflection in the
protein-ligand data to become 'contaminated' by a working set
reflection in the apo data which differs by 1 in the index along that
cell direction? I think 'contamination' starts to occur when the
positions of the reflections in reciprocal space differ by less than
half a reciprocal cell length, i.e. the points of one reciprocal
lattice fall closer than the half-index positions of the other lattice.
If the change in the 'a' parameter is 2%, the lattice shift is 1
reciprocal lattice unit (rlu) per 50 indices, reaching 1/2 rlu at h=25.
Let's say the cell parameter a=100 Ang; then what resolution is h=25?
The answer is 100/25 = 4 Ang, so all test set reflections at resolution
higher than 4 Ang are 'contaminated' by the working set of the other
crystal to a greater or lesser degree. If the data go to 2 Ang, that's
~90% of the data - and that's from only a 2% change.
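The arithmetic above can be sketched as a quick back-of-the-envelope
calculation (a minimal sketch; the function names are hypothetical, not
from any crystallographic package):

```python
def contamination_resolution(a, frac_change):
    """Resolution (Angstrom) beyond which test-set reflections start to
    overlap working-set reflections of a non-isomorphous crystal.

    A fractional cell change f shifts a reciprocal-lattice point at
    index h by f*h reciprocal lattice units (rlu); 'contamination'
    starts once that shift exceeds 1/2 rlu, i.e. at h = 0.5/f.
    Resolution along that axis is then d = a/h.
    """
    h_critical = 0.5 / frac_change   # index where the shift reaches 1/2 rlu
    return a / h_critical            # d = a/h, in Angstrom

def contaminated_fraction(d_contam, d_min):
    """Approximate fraction of reflections at resolution higher than
    d_contam, assuming the number of reflections inside resolution d
    scales as 1/d**3 (points in a sphere of radius 1/d)."""
    return 1.0 - (d_min / d_contam) ** 3

d = contamination_resolution(a=100.0, frac_change=0.02)
print(d)                                     # 4.0 Ang, as in the text
print(contaminated_fraction(d, d_min=2.0))   # 0.875, i.e. ~90% of the data
```

The 1/d**3 scaling simply counts reciprocal-lattice points inside a
sphere of radius 1/d, which is adequate for this rough estimate.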
So it's just as well that refinement to convergence removes the
resulting Rfree bias, since I suspect very few people resort to
'shaking' (or whatever extreme measures are recommended) their
protein-ligand models before refinement!
Cheers
-- Ian
> -----Original Message-----
> From: Lijun Liu [mailto:[log in to unmask]]
> Sent: 24 September 2009 19:00
> To: Ian Tickle
> Cc: [log in to unmask]
> Subject: Re: [ccp4bb] Rfree in similar data set
>
> Sticking to the same test set is a great and practical idea! It lowers
> the chance of getting biased by "self-validation". However, logically:
>
> 0) based on Bragg's law and the HKL<->XYZ relationship, no reflections
> are truly free from the others from the same crystal. But when the
> number of reflections is larger than the number of parameters minus
> other restraint conditions, you have more degrees of freedom,
> statistically. Uncertainties contributed from many aspects increase
> the freedom. Even so, the free test set is not purely free.
>
> 1) if the refined model is optimal/quasi-optimal, the model is then
> supposed to be consistent with the data used for the test set too;
> otherwise the model is far from optimal. In this regard, switching
> test sets/using different test sets will not be a problem for this
> kind of "ideal" case - enough refinement cycles should be able to
> bring consistent models.
>
> 2) keeping the same reflections for the test set means those
> reflections lose any chance to contribute to the minimization process
> in a direct fashion, which itself causes a kind of bias. From the
> point of view of the data, if any data are excluded from the
> calculation, whether randomly or not, an artificial bias results! If
> the initial model is biased, it will stay biased forever if the bias
> is caused by the exclusion of the test set (this seems especially
> true with low resolution data).
>
> So, a judgement may need to be based on your data! At the "end" of a
> smooth refinement (I mean when you are going to stop), switching test
> sets should not be a "huge" problem; otherwise the model is too wrong!
>
> Back to Mike's question: I suggest you keep the same test set, since
> your data were from exactly the same crystal. At least it saves
> convergence time.
>
> Lijun Liu
>
> On Sep 24, 2009, at 10:24 AM, Ian Tickle wrote:
>
> > -----Original Message-----
> > From: Dale Tronrud [mailto:[log in to unmask]]
> > Sent: 24 September 2009 17:21
> > To: Ian Tickle
> > Cc: [log in to unmask]
> > Subject: Re: [ccp4bb] Rfree in similar data set
>
> > While I agree with Ian on the theoretical level, in practice
> > people use free R's to make decisions before the ultimate model
> > is finished, and our refinement programs are still limited in
> > their abilities to find even a local minimum.
>
> I wasn't saying that Rfree is only useful for the ultimate finished
> model. My argument also applies to all intermediate models; the
> criterion is that the refinement has converged against the current
> working set, even if it is only an incomplete model, or if it is only
> to a local optimum. So it's perfectly possible to use Rfree for
> overfitting & other tests on intermediate models. The point is that it
> doesn't matter how you arrived at that optimum (whether local or
> global), Rfree is a function only of the parameters at that point, not
> of any previous history. If you arrived at that same local or global
> optimum via a path which didn't involve switching datasets midway, you
> must get the same answer for Rfree, so I just don't see how it can be
> biased one way and not biased the other. Note that this is meant as a
> 'thought experiment', I'm not saying necessarily that it's possible to
> perform this experiment in practice!
>
> > On the automated level the test set is used, sometimes, to
> > determine bulk solvent parameters, and more importantly to
> > calibrate the likelihood calculations in refinement. If the test
> > set is not "free" the likelihood calculation will overestimate
> > the reliability of the model and I'm not confident that error
> > will not become a self-fulfilling prophecy. It is not useful to
> > divine meaning from the free R until convergence is achieved,
> > but the test set is used from the first cycle.
>
> That is indeed a fair point, but I would maintain that the test set
> becomes 'free', i.e. free of the memory of all previous models, the
> first time you reach convergence, so the question of the effect on
> sigmaA calculations, which use the test set, is only relevant to the
> first refinement after switching test sets; thereafter it should be
> irrelevant. Converging to a local or global optimum wipes out all
> memory of previous models because the parameter values at that optimum
> are independent of any previous history, and so Rfree must be the same
> for that optimum no matter what path you took to get there.
>
> > Perhaps I'm in one of my more persnickety moods, but every
> > paper I've read about optimization algorithms says that the
> > method requires a number of iterations many times the number of
> > parameters in the model. The methods used in refinement
> > programs are pretty amazing in their ability to drop the
> > residuals with a small number of cycles, but we are violating
> > the mathematical warranty on each and every one of them. A
> > refinement program will produce a model that is close to
> > optimal, but cannot be expected to be optimal. Since we haven't
> > seen an optimal model yet it's hard to say how far we are off.
>
> I thought that for a quadratic approximation CG requires a number of
> iterations that is not more than the number of parameters (not that we
> ever use even that many iterations!)? Anyway that's a problem in
> theory, but it's possible to refine until nothing more 'interesting'
> happens, i.e. further changes appear to be purely random and at the
> level of rounding errors. Plotting the maximum shift of the atom
> positions or B factors from one iteration to the next is a very
> sensitive way of detecting whether convergence has been achieved;
> looking at changes in R factors or in RMSDs of bonds etc. is a bad
> way, since R factors are not sensitive to small changes and atoms can
> move in concert without affecting bond lengths etc. (or it may just be
> the waters that are moving!).
>
> As a final point I would note that cell parameters frequently vary by
> several % between crystals even from the same batch due to unavoidable
> variations in rates of freezing etc., so what you think are
> independent test set reflections may in reality overlap significantly
> in reciprocal space with working set reflections from another dataset
> anyway!
>
> -- Ian
>
> Cardiovascular Research Institute
> University of California, San Francisco
> 1700 4th Street, Box 2532
> San Francisco, CA 94158
> Phone & Fax: (415)514-2836
Disclaimer
This communication is confidential and may contain privileged information intended solely for the named addressee(s). It may not be used or disclosed except for the purpose for which it has been sent. If you are not the intended recipient you must not review, use, disclose, copy, distribute or take any action in reliance upon it. If you have received this communication in error, please notify Astex Therapeutics Ltd by emailing [log in to unmask] and destroy all copies of the message and any attached documents.
Astex Therapeutics Ltd monitors, controls and protects all its messaging traffic in compliance with its corporate email policy. The Company accepts no liability or responsibility for any onward transmission or use of emails and attachments having left the Astex Therapeutics domain. Unless expressly stated, opinions in this message are those of the individual sender and not of Astex Therapeutics Ltd. The recipient should check this email and any attachments for the presence of computer viruses. Astex Therapeutics Ltd accepts no liability for damage caused by any virus transmitted by this email. E-mail is susceptible to data corruption, interception, unauthorized amendment, and tampering, Astex Therapeutics Ltd only send and receive e-mails on the basis that the Company is not liable for any such alteration or any consequences thereof.
Astex Therapeutics Ltd., Registered in England at 436 Cambridge Science Park, Cambridge CB4 0QA under number 3751674