Pavel,
> Simply not true. Think why -:) Hint: in restrained refinement the
> weight applies to all terms - bonds, angles, torsions, etc... So if
> you choose tight weight in such refinement the torsions will be
> restrained as tightly as other terms (at least as it would be in CNS
> or phenix.refine). In torsion angle refinement (which is, in fact, a
> constrained rigid-body refinement) you still have weights, and you can
> make your torsion angle refinement as tight as you like.
I may be wrong, but here are the relevant lines from $CNS_TOPPAR/protein.top
and $CNS_TOPPAR/protein_rep.param:
ATOM N TYPE=NH1 CHARge=-0.35 END
ATOM CA TYPE=CH1E CHARge= 0.10 END
ATOM C TYPE=C CHARge= 0.55 END
...
ADD DIHEdral -C +N +CA +C
...
dihe X CH1E NH1 X 0.0 3 0.0 ! phi angle
The last line is from the parameter file section labeled "free
dihedrals".
Here is an excerpt from the torsions section of ener_lib.cif (CCP4):
. CH1 NH1 . . 0.000 0.000 3 # AMBER
. C CH1 . . 0.000 180.000 3 # AMBER
. CH2 NH1 . . 0.000 0.000 3 # AMBER
. C CH2 . . 0.000 180.000 3 # AMBER
. CH1 N . . 0.000 0.000 3 # AMBER
Which suggests that phi/psi angles are never restrained by refmac either
(a force constant of 0.000 means Edihe is always 0). There are of course
TRANS and CIS in standard links, but I am not sure whether they are
applied by default ("CONNECTIVITY No" seems to imply that they are not).
What I understand from
http://www.phenix-online.org/pipermail/phenixbb/2007-July/000355.html
is that phenix doesn't restrain them by default either (the default is
discard_psi_phi=True). Which it shouldn't, as it has been argued many
times that restraining phi/psi would make the Ramachandran map useless as
a validation tool.
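For reference, both excerpts above use the standard periodic torsion form,
E = k * (1 + cos(n*phi - delta)), so a force constant of zero makes the term
vanish at every angle. A minimal sketch (the function name is mine, not taken
from either program):

```python
import math

def torsion_energy(k, n, delta_deg, phi_deg):
    """Standard periodic torsion term: E = k * (1 + cos(n*phi - delta))."""
    return k * (1.0 + math.cos(math.radians(n * phi_deg - delta_deg)))

# The CNS and ener_lib entries above have force constant 0.0, so the
# phi/psi term contributes nothing at any angle:
for phi in (-180.0, -60.0, 0.0, 120.0):
    assert torsion_energy(0.0, 3, 0.0, phi) == 0.0

# A nonzero force constant would restrain the angle as usual:
print(torsion_energy(1.0, 3, 0.0, 0.0))  # energy at the periodicity maximum
```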
So unless you purposefully deviate from the default behavior, phi/psi will
not be restrained (chi angles will be, though).
Also, at least in my understanding, tightly restrained doesn't mean
constrained, which it would have to be to make it "equivalent" to rigid body.
I guess the point you were trying to make is that infinitely strong
restraints are equivalent to constraints. If that was your point, you
are absolutely right. However, I did not suggest that individual
B-factors should be infinitely restrained at low resolution. Again,
tightly restrained is not constrained. My point was (and is) that
"with properly chosen restraints individual B-factor refinement is
applicable at 3.1A resolution and, as it appears from results shown by
Jose Antonio, may be better than two-adp-groups-per-residue refinement".
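To make the restraint-vs-constraint distinction concrete, here is a toy
least-squares target for a single bonded pair of B-factors (illustrative only,
not any program's actual target): T(B1,B2) = (B1-b1)^2 + (B2-b2)^2 + w*(B1-B2)^2.
Setting the gradient to zero shows the refined difference shrinks as 1/(1+2w),
so the pair collapses to a single value (a constraint) only in the
infinite-weight limit:

```python
def refined_pair(b1, b2, w):
    """Minimize (B1-b1)^2 + (B2-b2)^2 + w*(B1-B2)^2 in closed form.

    Gradient conditions: 2(B1-b1) + 2w(B1-B2) = 0 and 2(B2-b2) - 2w(B1-B2) = 0,
    which give B1+B2 = b1+b2 and B1-B2 = (b1-b2)/(1+2w).
    """
    mean = 0.5 * (b1 + b2)
    half_diff = 0.5 * (b1 - b2) / (1.0 + 2.0 * w)
    return mean + half_diff, mean - half_diff

# Finite weight: the B-factors stay close but remain distinct (restrained).
print(refined_pair(30.0, 50.0, 1.0))
# Very large weight: the pair collapses to a single value (constraint limit).
print(refined_pair(30.0, 50.0, 1e9))
```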
> I don't see why two-B per residue wouldn't capture this distribution
> throughout the structure (it definitely wouldn't throughout the
> residue).
Because it generates a discontinuity. I hope most reasonable people can
agree that in a lysine the NZ will have a higher B-factor than the CB (most
of the time; there can be exceptions with salt bridges combined with severe
backbone disorder). Two-B-per-residue forces both of these atoms to have the
same B-factor, underestimating NZ and overestimating CB. I think at any
resolution a gradual increase along the side chain is a better description
of the disorder.
You are right to point out that grouped B-factors can capture some of the
B-factor variation, just not as well as properly restrained individual
B-factors.
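The discontinuity argument can be illustrated with made-up numbers (purely
hypothetical B-factors, not taken from any real structure):

```python
# Hypothetical B-factors rising gradually along a lysine side chain:
sidechain = {"CB": 30.0, "CG": 35.0, "CD": 40.0, "CE": 45.0, "NZ": 50.0}

# Two-ADP-groups-per-residue: every side-chain atom gets one shared value,
# here taken as the group mean.
group_b = sum(sidechain.values()) / len(sidechain)

for atom, b in sidechain.items():
    print(f"{atom}: true {b:.0f}, grouped {group_b:.0f}, error {group_b - b:+.0f}")
# CB ends up overestimated and NZ underestimated by the same amount; properly
# restrained individual B-factors can follow the gradual increase instead.
```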
> > I think the example that Jose Antonio originally provided (at 3.1A, not
> > 4A) clearly demonstrates that it makes more sense to do properly
> > restrained individual B-factor refinement than
> > two-adp-groups-per-residue refinement. Do you disagree specifically on
> > this issue?
>
> Of course no.
Thank you.
> This is why when I reply on bb to questions like this: "which
> B-factor, group or individual, do I need to refine at say 3.1A
> resolution", I always suggest to run these refinement jobs and see
> which one gives the best result:
...
> This will give the conclusive, rock solid answer about which ADP
> parameterization and refinement protocol is good for given model and
> data. An alternative is an endless speculation.
See, I am not sure. The R/Rfree for some of the options may be
indistinguishable (which was the case with Jose Antonio's example), in
which case "endless speculation" turns into "what should I do based on
what is known about the physics of the damn thing I am trying to model".
> As you see, in phenix.refine you can combine any B-factor refinement
> strategy with any (group, individual iso, aniso, tls), and apply it to
> any selected part of your structure. So, I assume at this point of the
> software automation, it is up to a smart researcher to decide which
> refinement strategy to use. You cannot blame the software for giving
> you the freedom to do what you may want to do.
I don't, and I am sorry if I gave the impression that I do... I see that I
said "implementation in CNS and phenix", which in some way makes it sound
like it's your fault. Sorry, I didn't mean that.
However, I still maintain that two-per-residue refinement should be
restrained to disallow these wild jumps, and that the way it is currently
implemented may lead to an unrealistic model.
> I guess I'm going off this discussion - otherwise phenix.refine will
> get less new options in the future if I keep writing -:)
And I was just getting warmed up... :)
--
Edwin Pozharski, PhD, Assistant Professor
University of Maryland, Baltimore
----------------------------------------------
When the Way is forgotten duty and justice appear;
Then knowledge and wisdom are born along with hypocrisy.
When harmonious relationships dissolve then respect and devotion arise;
When a nation falls to chaos then loyalty and patriotism are born.
------------------------------ / Lao Tse /