Refmac has made a lot of progress in refining individual B-factors at very
low resolution. At such resolution the procedure is not always stable;
however, with some effort we have found a considerable improvement in
R-free for all of our data sets with 4-5 A resolution limits.
The same applies to the refinement of anisotropic B-factors: it may work
at much lower resolution (e.g. 2 A) than typically assumed.
To achieve improvement in the above situations, we try many alternative
refinement strategies. Which strategy works best depends on the Refmac
version, the type of problem, and the quality of the model. The better
the model, the more improvement we observe.
Our experience may, or may not, apply to Phenix and other refinement
programs as the differences between programs are likely to be significant.
Our observations are consistent with past negative experience with regard
to B-factor refinement; however, in most situations we could find a
work-around.
Zbyszek Otwinowski
> There must be some middle ground here, in the form of a restraint scheme
> that allows one to gracefully reduce the effective parameter count as the
> resolution decreases, without "step changes" between different schemes.
> Perhaps by applying some variation on a LOWESS smooth to the B-factors,
> with the strength of the weighting term determined by the Wilson B factor?
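The smoothing scheme suggested above could be sketched roughly as follows. This is a hedged illustration only: the tricube kernel is borrowed from LOWESS, and the function name `smooth_bfactors` and the heuristic tying the window width to the Wilson B are assumptions, not any refinement program's actual scheme.

```python
import numpy as np

def smooth_bfactors(b, wilson_b, base_window=5):
    """Locally weighted average of per-atom B-factors along the chain.

    A larger Wilson B (weaker data) widens the window, smoothly reducing
    the effective number of independent B parameters.
    """
    b = np.asarray(b, dtype=float)
    # Illustrative heuristic: scale the half-window by Wilson B.
    half = max(1, int(round(base_window * wilson_b / 30.0)))
    smoothed = np.empty_like(b)
    for i in range(len(b)):
        lo, hi = max(0, i - half), min(len(b), i + half + 1)
        d = np.abs(np.arange(lo, hi) - i) / max(half, 1)
        w = (1.0 - d**3) ** 3          # tricube kernel, as in LOWESS
        smoothed[i] = np.sum(w * b[lo:hi]) / np.sum(w)
    return smoothed

print(smooth_bfactors([20, 80, 25, 30, 90, 28], wilson_b=60.0))
```

In a real restraint scheme the smoothed values would act as targets rather than replacements, so the per-atom Bs can still deviate where the data support it.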
>
>
> ________________________________
> From: Bjørn Pedersen <[log in to unmask]>
> Sent: Thursday, 11 February 2016 6:05 PM
> To: Tristan Croll
> Cc: [log in to unmask]
> Subject: Re: [ccp4bb] Individual B-factors at low resolution: a cautionary
> tale?
>
> Hi Tristan,
> I think your reasoning makes a lot of sense. I would be skeptical of the
> use of individual B-factors at this resolution of 3.6 A in general, when
> working with 'low-quality' anisotropic data etc. Regions with extremely
> high B-factors should help to warn you that you could be falling into a
> local minimum.
> With this kind of data I often find it useful to 'reset' all B-factors in
> my current model to the estimated Wilson B-factor once in a while. That
> helps me to escape the local minimum of the model (along with multiple
> parallel rounds of simulated annealing, tracking the R-factors of the
> low-resolution shells).
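The 'reset' trick described above amounts to overwriting the B-factor field of every atom record. A minimal sketch in plain Python (the file content and Wilson B value are illustrative; Refmac, Phenix, and CCP4 tools offer equivalent built-in options):

```python
def reset_bfactors(pdb_lines, wilson_b):
    """Set the isotropic B-factor of every ATOM/HETATM record to wilson_b."""
    out = []
    for line in pdb_lines:
        if line.startswith(("ATOM", "HETATM")):
            # Columns 61-66 (1-based) hold the B-factor in PDB format.
            line = line[:60] + f"{wilson_b:6.2f}" + line[66:]
        out.append(line)
    return out

record = "ATOM      1  CA  ALA A   1      11.104  13.207   2.100  1.00 97.35           C"
print(reset_bfactors([record], wilson_b=85.0)[0])
```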
>
> The follow-up question I have is more along the lines of whether it would
> ever be justifiable to use individual B-factors at this resolution. I have
> always been a proponent of one/two B-factors per residue in this
> resolution range, but I was wondering what the community thinks of this
> these days, with current improvements in target functions,
> parameterizations, etc.?
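The grouped-B parameterization mentioned above (one or two B-factors per residue) can be illustrated by collapsing per-atom Bs to group averages, cutting the parameter count several-fold. This is a simplified sketch, not any program's implementation; the atom classification and function names are assumptions.

```python
from collections import defaultdict

MAIN_CHAIN = {"N", "CA", "C", "O"}

def group_bfactors(atoms, two_per_residue=True):
    """atoms: list of (residue_id, atom_name, B). Returns the same list
    with each B replaced by its group average (main chain / side chain
    per residue, or one group per residue)."""
    def key(res, name):
        return (res, name in MAIN_CHAIN) if two_per_residue else res
    groups = defaultdict(list)
    for res, name, b in atoms:
        groups[key(res, name)].append(b)
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    return [(res, name, means[key(res, name)]) for res, name, b in atoms]

atoms = [(1, "N", 40.0), (1, "CA", 44.0), (1, "CB", 70.0), (1, "CG", 90.0)]
print(group_bfactors(atoms))
```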
>
> All the best
> -Bjørn
>
>
>
>
>
>
> On Mon, Feb 8, 2016 at 7:46 PM, Tristan Croll
> <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>
> For the most part they haven't moved far (1-2 tenths of an Angstrom in the
> backbone, further for sidechains) - and secondary structure remains
> essentially unchanged, which is a plus. I think it will come good with a
> little more tinkering.
>
>
> ________________________________
> From: Eleanor Dodson
> <[log in to unmask]<mailto:[log in to unmask]>>
> Sent: Tuesday, 9 February 2016 4:28 AM
>
> To: Tristan Croll
> Cc: [log in to unmask]<mailto:[log in to unmask]>
> Subject: Re: [ccp4bb] Individual B-factors at low resolution: a cautionary
> tale?
>
> Yes - there really isn't enough information at that resolution to support
> the number of parameters with x, y, z, B per atom. Do your atoms move away
> from their correct positions too?
>
> On 8 February 2016 at 18:22, Tristan Croll
> <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>
>
> That's OK - I tend to be very hands-on with my corrections. :)
>
>
> But what has me interested is not so much this, but that refining with
> individual B-factors actually seems to end up obscuring the information
> that says where the wrong atoms should go! In this particular case I went
> through multiple rounds of rebuilding/refinement of this domain, where
> successive adjustments simultaneously improved fit to the map, resolved
> clashes and improved the secondary structure, and each refinement with a
> TLS-only model led to sharper and stronger density. Then with a few rounds
> of rebuilding elsewhere combined with individual B-factor refinement, it's
> all but gone. I think it really argues for the idea that at these
> resolutions the B-factor model should be kept as simple as possible while
> rebuilding, and only extended to individual B-factors (if at all) in a
> final round for deposition.
>
>
> Cheers,
>
>
> Tristan
>
> ________________________________
> From: Eleanor Dodson
> <[log in to unmask]<mailto:[log in to unmask]>>
> Sent: Tuesday, 9 February 2016 4:02 AM
> To: Tristan Croll
> Cc: [log in to unmask]<mailto:[log in to unmask]>
> Subject: Re: [ccp4bb] Individual B-factors at low resolution: a cautionary
> tale?
>
> Yes - I think you are right. We use "B factors" as mop-up-error factors.
> If the atoms are in the wrong place, a very high B factor is a useful
> indicator that the atom should be deleted or moved! But you will probably
> need to do some hands-on correction to use that information.
> Eleanor
>
>
>
> On 8 February 2016 at 10:18, Tristan Croll
> <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>
> Hi all,
>
>
> The attached image depicts the weakest region of the 3.6 Angstrom
> structure I've been working on. The three maps shown are 2mFo-DFc at 1
> sigma, from three different refinements. The purple one is the first,
> after extensive rebuilding and refinement using strictly a TLS-only
> B-factor model. Not strong, but after sharpening and cross-checking with
> its slightly better resolved NCS partner, enough to be happy with it. The
> green map is the result of taking the refined TLS-only model and further
> refining with individual B-factors. So far so good - the maps are more or
> less the same.
>
>
> The blue surface is the current map, after multiple rounds of rebuilding
> in the (much) more strongly resolved regions, with TLS plus restrained
> individual B-factor refinement from a blank slate in between each round.
> It's looking... not so great.
>
>
> This result makes a lot of sense when I think about it further - but just
> to check that my reasoning is correct:
>
>
> One way to look at refinement with a single overall B-factor is that
> you're implicitly "flattening" your model - increasing the contribution of
> the weakly resolving regions, and decreasing the contribution of the
> stronger regions - akin to adjusting the contrast in a photograph. That's
> reflected (no pun intended) in the maps becoming stronger in these areas
> and a general sharpening throughout, even if the R factors are 1-2% higher
> than with individual B-factors. Most importantly, though, I think it
> forces the refinement algorithms to pay more attention to the coordinates
> in these regions. Once these are refined to convergence in the TLS-only
> B-factor model, then it seems safe to introduce individual B-factors since
> the refinement will simply fall further into the current local minimum.
> But if the model is refined from scratch with individual B-factors, then
> it's much easier for the refinement to over-fit the strongly resolving
> regions, balanced by smearing out the weak ones - significantly reducing
> the interpretability of weaker regions and resulting in an overall
> poorer-quality model.
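The "smearing" effect described above can be put in concrete numbers via the Debye-Waller factor, exp(-B*s^2/4) with s = 1/d, which gives the attenuation of an atom's scattering contribution at resolution d. The B values below are chosen for illustration only, to show how an inflated individual B-factor all but erases an atom's contribution at a 3.6 A edge compared with a moderate overall B:

```python
import math

def debye_waller(b_factor, d_spacing):
    """Attenuation of an atom's scattering at resolution d (Angstrom)."""
    s = 1.0 / d_spacing
    return math.exp(-b_factor * s * s / 4.0)

d = 3.6  # resolution edge of the structure discussed in the thread
for b in (60.0, 150.0, 300.0):  # moderate overall B vs. inflated individual Bs
    print(f"B = {b:5.1f}: attenuation at {d} A = {debye_waller(b, d):.3f}")
```

At B = 300 the atom contributes less than 1% of its full scattering at 3.6 A, which is why density in such regions effectively vanishes.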
>
>
> Does this make sense?
>
>
> Best regards,
>
>
> Tristan
>
>
>
>
Zbyszek Otwinowski
UT Southwestern Medical Center at Dallas
5323 Harry Hines Blvd.
Dallas, TX 75390-8816
Tel. 214-645-6385
Fax. 214-645-6353