I feel that is rather severe for ML refinement. Sometimes, for instance,
it helps to use all the data from the images, integrating right into the
corners of the detector, thus getting a very incomplete set for the
highest-resolution shell. But for experimental phasing it does not help
to have many weak reflections.
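For reference, Rmerge measures the internal agreement of repeated (symmetry-related) observations of each reflection, which is why many very weak high-resolution measurements can inflate it without necessarily hurting an ML target. A minimal sketch of the calculation (the data layout here is purely illustrative, not any particular program's format):

```python
def rmerge(observations):
    """Rmerge = sum_hkl sum_i |I_i - <I>| / sum_hkl sum_i I_i,
    taken over all repeated observations of each unique reflection.
    observations: dict mapping hkl -> list of measured intensities."""
    num = 0.0
    den = 0.0
    for hkl, intensities in observations.items():
        if len(intensities) < 2:
            continue  # a single measurement says nothing about agreement
        mean_i = sum(intensities) / len(intensities)
        num += sum(abs(i - mean_i) for i in intensities)
        den += sum(intensities)
    return num / den if den else 0.0

# Toy example: two reflections, measured 3 and 2 times respectively.
obs = {
    (1, 0, 0): [100.0, 110.0, 90.0],
    (0, 1, 0): [50.0, 54.0],
}
print(round(rmerge(obs), 4))  # -> 0.0594
```

Computing this per resolution shell (rather than overall) is what the highest-shell criteria below refer to.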
Is there any way of testing this, though? The only way I can think of is
to refine against a poorer set with varying protocols, then improve the
crystals/data and see which protocol for the poorer data gave the best
agreement in the model comparison.
And even that is not decisive; presumably the data would have come from
different crystals, with perhaps small differences between the models.
Eleanor
Shane Atwell wrote:
>
> Could someone point me to some standards for data quality, especially
> for publishing structures? I'm wondering in particular about highest
> shell completeness, multiplicity, sigma and Rmerge.
>
> A co-worker pointed me to a '97 article by Kleywegt and Jones:
>
> http://xray.bmc.uu.se/gerard/gmrp/gmrp.html
>
> "To decide at which shell to cut off the resolution, we nowadays tend
> to use the following criteria for the highest shell: completeness > 80
> %, multiplicity > 2, more than 60 % of the reflections with I > 3
> sigma(I), and Rmerge < 40 %. In our opinion, it is better to have a
> good 1.8 Å structure, than a poor 1.637 Å structure."
>
> Are these recommendations still valid with maximum-likelihood methods?
> We tend to use more data, especially in terms of the Rmerge and sigma
> cutoffs.
>
> Thanks in advance,
>
> Shane Atwell
>
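For concreteness, the four highest-shell criteria quoted from Kleywegt and Jones can be written as a simple check. The thresholds come straight from the quoted passage; the function name and the use of fractions rather than percentages are my own:

```python
def highest_shell_ok(completeness, multiplicity, frac_i_over_3sigma, rmerge):
    """Check the Kleywegt & Jones ('97) highest-shell criteria:
    completeness > 80%, multiplicity > 2, more than 60% of
    reflections with I > 3 sigma(I), and Rmerge < 40%.
    All ratio arguments are fractions (0-1), not percentages."""
    return (completeness > 0.80
            and multiplicity > 2.0
            and frac_i_over_3sigma > 0.60
            and rmerge < 0.40)

print(highest_shell_ok(0.85, 3.1, 0.65, 0.35))  # -> True  (all four pass)
print(highest_shell_ok(0.85, 3.1, 0.45, 0.35))  # -> False (I/sigma test fails)
```

As the thread discusses, whether such hard cutoffs remain appropriate for ML refinement is exactly the open question; this just makes the quoted rule explicit.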