Hi Gerard & Pavel
Isn't this the proviso I was referring to: that in practice one cannot
use an infinite weight because of rounding errors in the target
function? The weight just has to be 'big enough' that the restraint
residual becomes too small to be significant.
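(To see why "too large" really does bite, here's a minimal Python sketch -- toy numbers of my own, not from any refinement program -- showing how the data term simply vanishes from a double-precision target function once the weighted restraint term dwarfs it by about 1e16:)

```python
# Toy illustration (made-up numbers): IEEE doubles carry ~16 significant
# digits, so once the weighted restraint term exceeds the data term by
# ~1e16, adding the data term changes nothing -- it is rounded away and
# can no longer influence the minimisation at all.
data_term = 1.0
weight = 1e17
restraint_residual = 1.0
penalty = weight * restraint_residual**2   # 1e17

total = penalty + data_term
print(total == penalty)   # True: the data term has been rounded away
```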
In numerical constrained optimisation the method of increasing the
constraint weights (a.k.a. 'penalty coefficients') until the
constraint violations are sufficiently small is called the 'penalty
method', see http://en.wikipedia.org/wiki/Penalty_method . The method
where you substitute some of the parameters using the constraint
equations is called (you guessed it!) the 'substitution method', see
http://people.ucsc.edu/~rgil/Optimization.pdf . There are several
other methods, e.g. the 'augmented Lagrangian method' is very popular,
see http://www.ualberta.ca/CNS/RESEARCH/NAG/FastfloDoc/Tutorial/html/node112.html
. Like the penalty method, the AL method keeps all the original
parameters rather than eliminating some of them via the constraint
equations; in addition it introduces extra unknowns to be determined
(the Lagrange multipliers, one per constraint). The advantage is that
it removes the requirement that the penalty coefficient be very big.
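To make that concrete, here's a small Python sketch (a toy quadratic problem of my own devising, nothing from the thread) comparing the penalty method against the augmented Lagrangian on: minimise x1^2 + x2^2 subject to x1 + x2 = 1, whose exact answer is x1 = x2 = 0.5 with multiplier -1. Since the subproblems are quadratic I solve them in closed form rather than calling a general optimiser:

```python
# Toy problem (illustrative only):
#   minimise f(x1, x2) = x1^2 + x2^2   subject to   x1 + x2 = 1.
# Exact solution: x1 = x2 = 0.5, Lagrange multiplier lambda = -1.

def penalty_minimizer(mu):
    """Minimise the penalty subproblem x1^2 + x2^2 + mu*(x1 + x2 - 1)^2."""
    # By symmetry x1 = x2 = x; setting d/dx [2x^2 + mu*(2x - 1)^2] = 0 gives:
    x = mu / (1.0 + 2.0 * mu)
    return x, x

def al_minimizer(lam, mu):
    """Minimise x1^2 + x2^2 + lam*(x1 + x2 - 1) + (mu/2)*(x1 + x2 - 1)^2."""
    # Symmetric stationary point of the augmented-Lagrangian subproblem:
    x = (mu - lam) / (2.0 + 2.0 * mu)
    return x, x

# Penalty method: the weight must be cranked up to shrink the violation.
for mu in (1.0, 10.0, 100.0, 1000.0):
    x1, x2 = penalty_minimizer(mu)
    print(f"penalty   mu={mu:6.0f}  x1={x1:.6f}  violation={x1 + x2 - 1.0:+.1e}")

# Augmented Lagrangian: a modest, FIXED mu; the multiplier update does the work.
lam, mu = 0.0, 1.0
for _ in range(20):
    x1, x2 = al_minimizer(lam, mu)
    c = x1 + x2 - 1.0          # constraint violation
    lam += mu * c              # standard multiplier update
print(f"aug-Lagr  mu={mu:6.0f}  x1={x1:.6f}  lambda={lam:.6f}  violation={c:+.1e}")
```

Both drive the violation to zero, but the AL run gets there with mu = 1 by converging on the true multiplier, rather than by making the weight enormous.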
The point about all these methods of constrained optimisation is that,
in principle, they are just different ways of achieving the same
result; at least, that's what the textbooks say!
And now after the penalties and substitutions it's time to blow the whistle ...
Cheers
-- Ian
On Wed, Sep 22, 2010 at 10:00 PM, Pavel Afonine <[log in to unmask]> wrote:
> I agree with Gerard. Example: you are unlikely to reproduce the result of
> rigid-body refinement (where you refine six rotation/translation parameters)
> by instead refining individual coordinates with infinitely large weights
> on the restraints.
> Pavel.
>
>
> On 9/22/10 1:46 PM, Gerard DVD Kleywegt wrote:
>>
>> Hi Ian,
>>
>>> First, constraints are just a special case of restraints in the limit
>>> of infinite weights, in fact one way of getting constraints is simply
>>> to use restraints with very large weights (though not so large that
>>> you get rounding problems). These 'pseudo-constraints' will be
>>> indistinguishable in effect from the 'real thing'. So why treat
>>> restraints and constraints differently as far as the statistics are
>>> concerned: the difference is purely one of implementation.
>>
>> In practice this is not true, of course. If you impose "infinitely strong"
>> NCS restraints, any change to a thusly restrained parameter by the
>> refinement program will make the target function infinite, so effectively
>> your model will never change. This is very different from the behaviour
>> under NCS constraints and the resulting models in these two cases will in
>> fact be very easily distinguishable.
>>
>> --Gerard
>>
>> ******************************************************************
>> Gerard J. Kleywegt
>> Dept. of Cell & Molecular Biology University of Uppsala
>> Biomedical Centre Box 596
>> SE-751 24 Uppsala SWEDEN
>>
>> http://xray.bmc.uu.se/gerard/ mailto:[log in to unmask]
>> ******************************************************************
>> The opinions in this message are fictional. Any similarity
>> to actual opinions, living or dead, is purely coincidental.
>> ******************************************************************
>