Dear George,

thanks a lot! I see the point that in reciprocal space refinement one can 
refine directly against the observed intensities and sigmas. But in 
principle, one could do iterative real space refinement, structure factor 
and intensity calculation for refinement statistics and weights, and 
calculation of an improved electron density map (which, however, requires 
Fs again ...), and so forth until some convergence criterion is met. I 
wonder which refinement scheme is more efficient.
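
To make the bookkeeping of such a cycle concrete, here is a small toy 
sketch in Python/numpy. The 1D "unit cell", the model density and the 
"observed" data are all made up for illustration; it is not the scheme of 
any particular program, only the per-cycle statistics I have in mind:

    # Toy 1D illustration: from the current model density, compute structure
    # factors and intensities by FFT, then derive refinement statistics and
    # weights from "observed" intensities and their sigmas.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64                                   # grid points of a 1D "unit cell"
    x = np.arange(n) / n

    # current model density (two Gaussian "atoms"); purely invented
    rho_model = (np.exp(-((x - 0.3) / 0.03)**2)
                 + 0.5 * np.exp(-((x - 0.7) / 0.04)**2))

    # structure factors and calculated intensities from the model
    F_calc = np.fft.rfft(rho_model)
    I_calc = np.abs(F_calc)**2

    # fake "observed" intensities with standard uncertainties
    I_obs = I_calc * (1.0 + 0.05 * rng.standard_normal(I_calc.size))
    sigma = 0.05 * np.maximum(I_obs, 1e-6)

    # overall scale and weighted intensity residual: the same Fo^2/sigma
    # comparison a reciprocal space program would minimise, used here only
    # for statistics and weights
    k = np.sum(I_obs * I_calc / sigma**2) / np.sum(I_calc**2 / sigma**2)
    wR2 = np.sqrt(np.sum(((I_obs - k * I_calc) / sigma)**2)
                  / np.sum((I_obs / sigma)**2))
    print(f"scale k = {k:.3f}, weighted intensity residual = {wR2:.3f}")

The real space refinement step itself is not shown; in the iterative scheme 
it would update rho_model against a map computed from these Fs, and the 
cycle would repeat until the residual stops improving.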

The missing reflections in map calculation are something that we have to 
live with, and unless the data are severely incomplete, I must admit that 
I don't worry about them too much.
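
Out of curiosity, a throwaway numpy experiment on the same kind of 1D toy 
density: leave out a small, random fraction of the Fourier terms and look 
at the change in the synthesis. Grid, density and percentages are invented, 
so the numbers are only illustrative, not a statement about real data:

    # How much does a 1D Fourier synthesis change if a random fraction of
    # the terms is simply set to zero? Purely illustrative; a real map is
    # 3D with resolution-dependent completeness.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 256
    x = np.arange(n) / n
    rho = np.exp(-((x - 0.3) / 0.02)**2) + 0.7 * np.exp(-((x - 0.65) / 0.03)**2)

    F = np.fft.rfft(rho)
    for missing_fraction in (0.02, 0.05, 0.20):
        F_incomplete = F.copy()
        # never drop F(0), the overall scale term, only "reflections"
        idx = 1 + rng.choice(F.size - 1,
                             size=int(missing_fraction * (F.size - 1)),
                             replace=False)
        F_incomplete[idx] = 0.0
        rho_incomplete = np.fft.irfft(F_incomplete, n)
        rms = np.sqrt(np.mean((rho - rho_incomplete)**2)) / rho.max()
        print(f"{missing_fraction:4.0%} of terms missing -> "
              f"relative rms map error {rms:.3f}")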

The twinning problem is really severe! Here, I don't see how it could be 
handled in a clever way in real space.
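
Just to spell out for myself why it is so nasty: for a two-component twin 
with twin fraction alpha, each pair of twin-related observations mixes the 
two true intensities, and the algebraic detwinning blows up as alpha 
approaches 0.5. A quick pure-Python illustration with made-up numbers:

    # Two-component (hemihedral) twin: twin-related observations mix the
    # two true intensities,
    #   J1 = (1 - alpha) * I1 + alpha * I2
    #   J2 = alpha * I1 + (1 - alpha) * I2
    # Detwinning inverts this 2x2 system, and the inversion diverges as
    # alpha -> 0.5, so partitioning Fo^2 between the twin domains (needed
    # before any map or real space step) becomes ill-determined.
    I1_true, I2_true = 100.0, 40.0
    noise = 2.0                              # made-up measurement error

    for alpha in (0.10, 0.30, 0.45, 0.49):
        J1 = (1 - alpha) * I1_true + alpha * I2_true + noise
        J2 = alpha * I1_true + (1 - alpha) * I2_true - noise
        # standard detwinning formulae
        I1 = ((1 - alpha) * J1 - alpha * J2) / (1 - 2 * alpha)
        I2 = ((1 - alpha) * J2 - alpha * J1) / (1 - 2 * alpha)
        print(f"alpha = {alpha:.2f}: detwinned I1 = {I1:7.1f} (true 100.0), "
              f"I2 = {I2:7.1f} (true 40.0)")

Near alpha = 0.5 even a small measurement error in J1, J2 is amplified 
enormously, which is exactly the partitioning problem in reciprocal space; 
I don't see how a real space scheme would get around it either.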

Interesting discussion ...

Best wishes,

Dirk.

On 29.10.10 10:41, George M. Sheldrick wrote:
> Dear Dirk,
>
> There are good reasons why real space refinement has never become popular.
> With reciprocal space refinement, you refine directly against what you
> measured, taking the standard uncertainty of each individual intensity
> into account. In this context I was pleased to read in CCP4bb that REFMAC
> will soon be refining against intensities (like SHELXL). Then the
> assumptions made (e.g. no distortion of the expected intensity distribution
> by NCS or twinning) and even 'bugs' in (c)truncate will no longer
> matter. If for some reason a reflection wasn't measured, then simply
> leaving it out does not invalidate a reciprocal space refinement.
> The same applies to reflections that are reserved for Rfree.
>
> In contrast, the electron density is only theoretically correct if all
> reflections between 0,0,0 and infinity are included in the Fourier
> summation. For a twin it is even worse, because we don't know how to
> partition the difference between Fo^2 and Fc^2 between the twin
> components. None of the attempts to work around these problems are
> entirely convincing. Maps and real space refinement are invaluable in
> the intermediate stages of model building and correction, but the
> final refinement should be performed in reciprocal space.
>
> Best wishes, George
>
> Prof. George M. Sheldrick FRS
> Dept. Structural Chemistry,
> University of Goettingen,
> Tammannstr. 4,
> D37077 Goettingen, Germany
> Tel. +49-551-39-3021 or -3068
> Fax. +49-551-39-22582
>
>
> On Fri, 29 Oct 2010, Dirk Kostrewa wrote:
>
>> Hi Robbie,
>>
>> yes, the apparently larger radius of convergence in real space refinement
>> impresses me, too. Therefore, I usually do local real space refinement after
>> manually correcting errors, either with Moloc at lower resolution or with Coot
>> at higher resolution, prior to reciprocal space refinement.
>>
>> If I recall correctly, real space refinement was introduced by Robert Diamond
>> in the 60s long before reciprocal space refinement. In the 90s Michael Chapman
>> tried to revive it, but without much success, as far as I know. With the fast
>> computers today, maybe the time has come again for real space refinement ...
>>
>> Best regards,
>>
>> Dirk.
>>
>> On 29.10.10 08:03, Robbie Joosten wrote:
>>> Hi Bart,
>>>
>>> I agree with the building strategy you propose, but at some point it stops
>>> helping and a bit more attention to detail is needed. Reciprocal space
>>> refinement doesn't seem to do the fine details. It always surprises me how
>>> much atoms still move when you real-space refine a refined model, especially
>>> the waters. I admit this is not a fair comparison.
>>>
>>> High resolution data helps, but better data makes it tempting to put too
>>> little effort in optimising the model. I've seen some horribly obvious
>>> errors in hi-res models (more than 10 sigma difference density peaks for
>>> misplaced side chains). At the same time there are quite a lot of low-res
>>> models that are exceptionally good.
>>>
>>> Cheers,
>>> Robbie
>>>
>>>> Date: Thu, 28 Oct 2010 16:32:04 -0600
>>>> From: [log in to unmask]
>>>> Subject: Re: [ccp4bb] Against Method (R)
>>>> To: [log in to unmask]
>>>>
>>>> On 10-10-28 04:09 PM, Ethan Merritt wrote:
>>>>> This I can answer based on experience. One can take the coordinates
>>>>> from a structure refined at near atomic resolution (~1.0A), including
>>>>> multiple conformations, partial occupancy waters, etc, and use it to
>>>>> calculate R factors against a lower resolution (say 2.5A) data set
>>>>> collected from an isomorphous crystal. The R factors from this
>>>>> total-rigid-body replacement will be better than anything you could
>>>>> get from refinement against the lower resolution data. In fact,
>>>>> refinement from this starting point will just make the R factors worse.
>>>>>
>>>>> What this tells us is that the crystallographic residuals can recognize
>>>>> a better model when they see one. But our refinement programs are not
>>>>> good enough to produce such a better model in the first place. Worse,
>>>>> they are not even good enough to avoid degrading the model.
>>>>>
>>>>> That's essentially the same thing Bart said, perhaps a little more
>>>>> pessimistic :-)
>>>>>
>>>>> cheers,
>>>>>
>>>>> Ethan
>>>>>
>>>> Not pessimistic at all, just realistic and perhaps even optimistic for
>>>> methods developers as apparently there is still quite a bit of progress
>>>> that can be made by improving the "search strategy" during refinement.
>>>>
>>>> During manual refinement I normally tell students not to bother about
>>>> translating/rotating/torsioning atoms by just a tiny bit to make it fit
>>>> better. Likewise there is no point in moving atoms a little bit to
>>>> correct a distorted bond or bond length. If it needed to move that
>>>> little bit the refinement program would have done it for you. Look for
>>>> discrete errors in the problematic residue or its neighbors: peptide
>>>> flips, 120 degree changes in side chain dihedrals, etc. If you can find
>>>> and fix one of those errors a lot of the stereochemical distortions and
>>>> non-ideal fit to density surrounding that residue will suddenly
>>>> disappear as well.
>>>>
>>>> The benefit of high resolution is that it is much easier to pick up and
>>>> fix such errors (or not make them in the first place).
>>>>
>>>> Bart
>>>>
>>>> --
>>>>
>>>> ============================================================================
>>>>
>>>> Bart Hazes (Associate Professor)
>>>> Dept. of Medical Microbiology & Immunology
>>>> University of Alberta
>>>> 1-15 Medical Sciences Building
>>>> Edmonton, Alberta
>>>> Canada, T6G 2H7
>>>> phone: 1-780-492-0042
>>>> fax: 1-780-492-7521
>>>>
>>>> ============================================================================
>> -- 
>>
>> *******************************************************
>> Dirk Kostrewa
>> Gene Center Munich, A5.07
>> Department of Biochemistry
>> Ludwig-Maximilians-Universität München
>> Feodor-Lynen-Str. 25
>> D-81377 Munich
>> Germany
>> Phone: 	+49-89-2180-76845
>> Fax: 	+49-89-2180-76999
>> E-mail:	[log in to unmask]
>> WWW:	www.genzentrum.lmu.de
>> *******************************************************
>>
>>

-- 

*******************************************************
Dirk Kostrewa
Gene Center Munich, A5.07
Department of Biochemistry
Ludwig-Maximilians-Universität München
Feodor-Lynen-Str. 25
D-81377 Munich
Germany
Phone: 	+49-89-2180-76845
Fax: 	+49-89-2180-76999
E-mail:	[log in to unmask]
WWW:	www.genzentrum.lmu.de
*******************************************************