Hi, IMO preconceived notions of where to apply a resolution cut-off to the data are without theoretical foundation and most likely wrong.  You may decide empirically, based on a sample of data, what the optimal cut-off criteria are, but that doesn't mean the same criteria are generally applicable to other data.  Modern refinement software is now sufficiently advanced that the data are automatically weighted to enhance the effect of 'good' data on the results relative to that of 'bad' data.  Such a continuous weighting function is likely to be much more realistic from a probabilistic standpoint than the 'Heaviside' step function that is conventionally applied.  The fall-off in data quality with resolution is clearly gradual, so why on earth should the weight be a step function?
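
To make the contrast concrete, here is a toy Python sketch comparing the two schemes. The exponential I/sigma fall-off and the I/sigma-based weight are purely illustrative assumptions, not any particular refinement program's actual weighting:

import numpy as np

# Toy resolution axis in Angstrom, from low to high resolution.
d = np.linspace(3.5, 1.5, 200)
s = 1.0 / d  # reciprocal resolution, 1/d

# Assumed exponential fall-off of signal-to-noise with resolution
# (a stand-in for the real, data-dependent decay).
i_over_sigma = 20.0 * np.exp(-12.0 * (s - s.min()))

# Conventional 'Heaviside' treatment: full weight inside the cut-off,
# zero weight beyond it.
d_cut = 2.6
w_step = (d >= d_cut).astype(float)

# A smooth alternative: weight falls off continuously with I/sigma,
# loosely mimicking how refinement down-weights noisy terms.
w_smooth = i_over_sigma**2 / (1.0 + i_over_sigma**2)

for di, ws, wm in zip(d[::40], w_step[::40], w_smooth[::40]):
    print(f"d = {di:4.2f} A   step weight = {ws:.0f}   smooth weight = {wm:.2f}")

The step function throws away everything past 2.6 A, while the smooth weight still extracts partial information from the weaker shells.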

Just my 2p.

Cheers

-- Ian


On 28 November 2015 at 11:21, Greenstone talis <[log in to unmask]> wrote:

Dear All,

I initially got a 3.0 A dataset that I used for MR and refinement. Some months later I got better diffracting crystals and refined the structure with a new dataset at 2.6 A (for this, I preserved the original Rfree set).

Even though I knew I was already at a reasonable resolution limit, I was curious, so I processed the data to 1.8 A and used it for refinement (again, preserving the original Rfree set). I was surprised to see that despite the worse statistics, the maps look better (pictures and some numbers attached).

2.6 A dataset: 

Rmeas: 0.167 (0.736)

I/sigma: 9.2 (2.2)

CC(1/2): 0.991 (0.718)

Completeness (%): 99.6 (99.7)

1.8 A dataset:

Rmeas: 0.247 (2.707)

I/sigma: 5.6 (0.3)

CC(1/2): 0.987 (-0.015)

Completeness (%): 66.7 (9.5)
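
For context, CC(1/2) is the Pearson correlation between intensities measured in two random half-datasets, so the outer-shell value of -0.015 above means that shell is essentially pure noise. A minimal Python sketch with made-up signal and noise variances, just to illustrate how the statistic behaves:

import numpy as np

rng = np.random.default_rng(0)

def cc_half(signal_var, noise_var, n=10000):
    """Pearson correlation between two half-dataset intensity estimates
    that share a common true signal plus independent noise."""
    true_i = rng.normal(0.0, np.sqrt(signal_var), n)
    half1 = true_i + rng.normal(0.0, np.sqrt(noise_var), n)
    half2 = true_i + rng.normal(0.0, np.sqrt(noise_var), n)
    return np.corrcoef(half1, half2)[0, 1]

print(f"strong signal:    {cc_half(1.0, 0.01):.3f}")   # near 1, like the inner shells
print(f"almost no signal: {cc_half(0.001, 1.0):.3f}")  # near 0, like the 1.8 A outer shell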

I was expecting worse maps with the 1.8 A dataset... any explanations would be much appreciated.

Thank you,

Talis