Yes - I think you are right. We use "B factors" as mop-up-error factors. If the atoms are in the wrong place, a very high B factor is a useful indicator that the atom should be deleted or moved! But you will probably need to do some hands-on correction to use that information.
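
If you want to script that mop-up, something along these lines is a starting point - a minimal sketch only (it assumes a standard fixed-width PDB file, and the 60.0 cutoff is purely illustrative; pick a threshold appropriate to your resolution):

import sys

def flag_high_b(pdb_path, cutoff=60.0):
    # List atoms whose isotropic B factor exceeds the cutoff, as
    # candidates for deletion or rebuilding. Column slices follow
    # the PDB fixed-width format (B factor in columns 61-66).
    with open(pdb_path) as f:
        for line in f:
            if line.startswith(("ATOM  ", "HETATM")):
                try:
                    b = float(line[60:66])
                except ValueError:
                    continue
                if b > cutoff:
                    print("%s %s %s %s  B=%6.2f" % (
                        line[12:16].strip(),   # atom name
                        line[17:20],           # residue name
                        line[21],              # chain ID
                        line[22:26].strip(),   # residue number
                        b))

if __name__ == "__main__":
    flag_high_b(sys.argv[1])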
Eleanor



On 8 February 2016 at 10:18, Tristan Croll wrote:

Hi all,


The attached image depicts the weakest region of the 3.6 Angstrom structure I've been working on. The three maps shown are 2mFo-DFc at 1 sigma, from three different refinements. The purple one is the first, after extensive rebuilding and refinement using strictly a TLS-only B-factor model. Not strong, but after sharpening and cross-checking with its slightly better resolved NCS partner, enough to be happy with it. The green map is the result of taking the refined TLS-only model and further refining with individual B-factors. So far so good - the maps are more or less the same.


The blue surface is the current map, after multiple rounds of rebuilding in the (much) more strongly resolved regions, with TLS plus restrained individual B-factor refinement from a blank slate in between each round. It's looking... not so great.


This result makes a lot of sense when I think about it further - but just to check that my reasoning is correct:


One way to look at refinement with a single overall B-factor is that you're implicitly "flattening" your model - increasing the contribution of the weakly resolving regions and decreasing the contribution of the stronger ones - akin to adjusting the contrast in a photograph. That's reflected (no pun intended) in the maps becoming stronger in these areas and in a general sharpening throughout, even if the R factors are 1-2% higher than with individual B-factors. Most importantly, though, I think it forces the refinement algorithm to pay more attention to the coordinates in these regions. Once the coordinates are refined to convergence under the TLS-only B-factor model, it seems safe to introduce individual B-factors, since the refinement will simply fall further into the current local minimum. But if the model is refined from scratch with individual B-factors, it's much easier for the refinement to over-fit the strongly resolving regions, balanced by smearing out the weak ones - significantly reducing the interpretability of the weaker regions and yielding an overall poorer-quality model.
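
As a quick numerical check of that "smearing" intuition - a minimal sketch in Python, using the standard Debye-Waller attenuation exp(-B/(4d^2)) of an atom's scattering at resolution d; the B values and resolutions below are purely illustrative:

import math

def attenuation(b_factor, d):
    # Fractional scattering contribution of an atom with the given
    # isotropic B factor at resolution d (in Angstroms):
    # exp(-B * (sin(theta)/lambda)^2) = exp(-B / (4 * d^2))
    return math.exp(-b_factor / (4.0 * d * d))

for d in (8.0, 5.0, 3.6):
    for b in (20.0, 60.0, 150.0):
        print("d = %.1f A, B = %5.1f -> %.3f" % (d, b, attenuation(b, d)))

At 3.6 Angstroms an atom refined to B ~ 150 retains only about 5% of its scattering, so its coordinate gradients all but vanish and the refinement has little incentive to move it - whereas under a flat (overall or TLS-only) B model the same region keeps its full weight in the target, which is the "flattening" described above.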


Does this make sense?


Best regards,


Tristan