> One of my points is that a lower R-value does not indicate a better
> structure. The second point is that it is difficult to compare R-values from
> different programs because they are calculated quite differently. If you
> take a structure refined with program X and ask program Y to calculate its
> R-value, it is going to be quite different (usually much higher) than the
> one calculated by program X, although they are calculated from the same
> structure.
This is all at least partly correct, and in combination, what I said makes
perfect sense:
(a) it is consistent that a loose geometry lowers Rwork at the expense of a
larger gap to Rfree: this is clear from the table I provided. The gap is
fairly large, although the actual calculation of the expected R/Rfree ratio
is another issue (cf. the Tickle/Cruickshank papers).
(b) upon tightening the restraints, Rfree decreases substantially, while
Rwork increases, thereby closing the gap (see the sketch after this list). I
do believe that, taken together, this means something.
(c) that the same input file, given the different protocols/procedures in
phenix and refmac, results in not identical but still very similar values
for zero cycles ('initial') I actually find quite reassuring.
(d) that the automated Phenix restraint weights give very similar values for
the rmsds compared to hand optimization in refmac I also find consistent and
reassuring.
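
To illustrate (a) and (b): below is a minimal Python sketch (with made-up
amplitudes, not the actual implementation of either program) of how Rwork
and Rfree are conventionally computed. Real programs differ in scaling and
bulk-solvent treatment, which is also why the same model yields different
R-values in different programs.

import numpy as np

def r_factor(f_obs, f_calc):
    # conventional linear R: sum(|Fobs - k*Fcalc|) / sum(Fobs),
    # with a single overall scale factor k
    k = f_obs.sum() / f_calc.sum()
    return np.abs(f_obs - k * f_calc).sum() / f_obs.sum()

# made-up amplitudes and a ~5% test set, purely for illustration
rng = np.random.default_rng(0)
f_obs = rng.uniform(0.5, 1.5, 1000)
f_calc = f_obs + rng.normal(0.0, 0.1, 1000)  # "model" with some error
free = rng.random(1000) < 0.05

r_work = r_factor(f_obs[~free], f_calc[~free])
r_free = r_factor(f_obs[free], f_calc[free])
print(f"Rwork={r_work:.4f}  Rfree={r_free:.4f}  gap={r_free - r_work:.4f}")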
> A third point is that you ask how to automate refmac to get the same
> results as with phenix
Seriously? Where did I ask for that? I simply expect similar and consistent
refinement statistics when starting from the same model and the same data,
and applying a similar restraint weight optimization. Again, I find it quite
reassuring that it works and shows reasonable results.
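
For reference, and only schematically (the exact targets and weighting
conventions differ between refmac and phenix), what such a weight
optimization tunes is the balance in the refinement target

    E_total = E_geom + w * E_xray

where a large w (loose geometry) buys a lower Rwork at the risk of
overfitting, i.e. a wider Rfree-Rwork gap, while a smaller w tightens the
restraints, raising Rwork and letting Rfree come down if the model was
overfit.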
> then you list some numbers which don't indicate how similar the results
> between phenix and refmac really are.
OK, so we agree to disagree on the definition of 'result', which in this
context clearly meant the global refinement statistics listed.
> I often meet scientists at workshops and conferences who don't distinguish
> between a good structure and low R-values.
This again is totally beside the point, because we start from the same model
and the same data.
If, under these circumstances, lower cross-validation measures such as Rfree
or -LLf do not mean anything on a global scale, then I don't know why we
bother at all.
Cheers, BR