Hi Manfred
> thanks a lot for your comments; they raise some interesting
> points.
>
> R_pim should give the precision of the averaged measurement,
> hence the name. It will decrease with increasing data redundancy,
> obviously. It will fall off as the inverse square root of the
> redundancy if only statistical (counting) errors are present. If
> other things happen, such as radiation damage, then you are
> introducing systematic errors, which will lead to R_pim either
> decreasing less than it should, or even increasing.
>
> This raises an important issue. As more and more images are added
> to a data set, could one decide at some point whether it is worth
> adding any further images?
This really is the point: in these days of fast data collection, I
assume that most people collect more frames than necessary for
completeness. At least, I always do. So the question is no longer "are
these data good enough?" -- that you can test quickly enough with
downstream programs.
Rather, it is "how many of the frames that I have should I include?" --
ideally answered without running the same set of downstream programs
on 20 different combinations of frames.
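What would actually save time is a cheap scan over cumulative frame
ranges first, so the downstream programs get run once, on the winning
range. A back-of-envelope sketch of that scan (the flat arrays of
frame number / reflection id / intensity are a layout I'm making up
here -- substitute whatever your integration program actually writes
out):

  import numpy as np

  def r_pim(groups):
      # same statistic as above; skip reflections measured only once
      num = den = 0.0
      for g in groups:
          if len(g) < 2:
              continue
          num += np.sqrt(1.0 / (len(g) - 1)) * np.abs(g - g.mean()).sum()
          den += g.sum()
      return num / den if den else float("nan")

  def r_pim_vs_cutoff(frame, hkl_id, intensity, step=50):
      # merge only observations recorded up to each frame cutoff
      out = []
      for cutoff in range(step, int(frame.max()) + 1, step):
          sel = frame <= cutoff
          groups = {}
          for h, i in zip(hkl_id[sel], intensity[sel]):
              groups.setdefault(h, []).append(i)
          out.append((cutoff, r_pim([np.asarray(g) for g in groups.values()])))
      return out

Of course this only shows where R_pim bottoms out; it says nothing
about whether the late frames are quietly dragging <I> around through
damage.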
Radiation damage is the key, innit. Sure, I can pat myself on the
back by downweighting everything by sqrt(1/(N-1)) -- so after 15
revolutions of a tetragonal crystal that'll give a brilliant R_pim,
but the crystal will be a cinder and the data presumably crap.
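To put a number on that: if each unique reflection ends up with, say,
N = 30 observations (a figure invented purely for illustration), the
weight is

  sqrt(1/(30 - 1)) ~= 0.19

so the reported R_pim sits about five times below the corresponding
unweighted R_merge, cinder or no cinder.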
But it's the intermediate zone (1-2x completeness) where I need help,
and I don't see that R_pim is discriminating enough there.
phx.