My guess is that the integration is roughly the same, unless the profiles
are really poorly defined, but that it is the scaling that suffers from
using a lot of high-resolution weak data. We've integrated data to, say,
I/sig = 0.5, and sometimes see more problems with scaling; if I then cut
back to I/sig = 1, it's fine. The major difficulty arises if the crystal
is dying and the decay/scaling/absorption model isn't good enough. So
that's definitely a consideration when trying to get a more complete
data set and higher resolution (hence more redundancy).
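
For concreteness, a minimal sketch of the per-shell I/sig check I have
in mind (assuming the reflections are already in numpy arrays d (in
Angstrom), i_obs and sig; the array names and the shell count are
placeholders, not anything package-specific):

  import numpy as np

  def shell_i_over_sigma(d, i_obs, sig, n_shells=10):
      # Bin on 1/d^3 (equal-volume shells), so each shell holds
      # roughly the same number of reflections.
      s3 = 1.0 / d ** 3
      edges = np.linspace(s3.min(), s3.max(), n_shells + 1)
      for lo, hi in zip(edges[:-1], edges[1:]):
          sel = (s3 >= lo) & (s3 <= hi)
          if sel.any():
              print("%5.2f - %5.2f A   <I/sig> = %5.2f"
                    % (lo ** (-1.0 / 3), hi ** (-1.0 / 3),
                       (i_obs[sel] / sig[sel]).mean()))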

Bernie


On Thu, March 22, 2007 12:21 pm, Jose Antonio Cuesta-Seijo wrote:
> I have observed something similar myself using Saint on a Bruker
> Smart6K detector and using denzo on lab and synchrotron detectors.
> First, the I over sigma never really drops to zero, no matter how far
> beyond your real resolution limit you integrate.
> Second, if I integrate to the visual resolution limit of, say, 1.5A,
> I get nice dataset statistics. If I now re-integrate (and re-scale)
> to 1.2A, thus including mostly empty (background) pixels everywhere,
> then cut the dataset after scaling to the same 1.5A limit, the
> statistics are much worse, both in I over sigma and Rint. (Sorry, no
> numbers here; I tried this some time ago.)
> I guess the integration is suffering at the profile-fitting level,
> while the scaling suffers from general noise (those weak reflections
> between 1.5A and 1.2A will be half of your total data!).
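>
> (As a quick sanity check of that "half of your total data" figure - a
> sketch assuming only that the number of unique reflections out to
> resolution d grows as the reciprocal-space volume, i.e. as 1/d^3:)
>
>   n = lambda d: 1.0 / d ** 3   # relative reflection count out to resolution d
>   print("%.0f%%" % (100 * (n(1.2) - n(1.5)) / n(1.2)))   # -> 49%, about half
>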
> I would be careful about going much beyond the visual resolution limit.
> Jose.
>
> **************************************
> Jose Antonio Cuesta-Seijo
> Cancer Genomics and Proteomics
> Ontario Cancer Institute, UHN
> MaRS TMDT Room 4-902M
> 101 College Street
> Toronto, ON M5G 1L7, Canada
> Phone:  (416)581-7544
> Fax: (416)581-7562
> email: [log in to unmask]
> **************************************
>
>
> On Mar 22, 2007, at 10:59 AM, Sue Roberts wrote:
>
>> I have a question about how the experimental sigmas are affected
>> when one includes resolution shells containing mostly unobserved
>> reflections.  Does this vary with the data reduction software being
>> used?
>>
>> One thing I've noticed when scaling data (this with d*trek (Crystal
>> Clear) since it's the program I use most) is that I/sigma(I) of
>> reflections can change significantly when one changes the high
>> resolution cutoff.
>>
>> If I set the detector so that the edge is about where I stop seeing
>> reflections and integrate to the corner of the detector, I'll get a
>> dataset where I/sigma(I) is really compressed - there is a lot of
>> high resolution data with I/sigma(I) about 1, but for the lowest
>> resolution shell, the overall I/sigma(I) will be maybe 8-9.  If the
>> data set is cut off at a lower resolution (where I/sigma(I) in the
>> shell is about 2) and scaled, I/sigma(I) in the lowest resolution
>> shell will be maybe 20 or even higher (OK, there is a different
>> resolution cutoff for this shell, but if I look at individual
>> reflections, the trend holds).  Since the maximum likelihood
>> refinements use sigmas for weighting, this must affect the
>> refinement.  My experience is that interpretation of the maps is
>> easier when the cut-off datasets are used. (Refinement is via
>> refmac5 or shelx).  Also, I'm mostly talking about datasets from
>> well-diffracting crystals (better than 2 A).
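>>
>> One way to check the trend on individual reflections, as a sketch:
>> read the two scaled sets into dicts keyed by hkl (the dict layout
>> and names here are just assumptions, nothing d*trek-specific) and
>> compare I/sigma(I) reflection by reflection:
>>
>>   def compare_i_over_sig(set_a, set_b):
>>       # set_a, set_b: {(h, k, l): (I, sigma)} from the two scalings
>>       common = set(set_a) & set(set_b)
>>       ratios = sorted((set_a[hkl][0] / set_a[hkl][1]) /
>>                       (set_b[hkl][0] / set_b[hkl][1]) for hkl in common)
>>       print("%d common reflections, median I/sig ratio (A/B) = %.2f"
>>             % (len(common), ratios[len(ratios) // 2]))
>>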
>>
>> Sue
>>
>>
>> On Mar 22, 2007, at 2:29 AM, Eleanor Dodson wrote:
>>
>>> I feel that is rather severe for ML refinement - sometimes for
>>> instance it helps to use all the data from the images, integrating
>>> right into the corners, thus getting a very incomplete set for the
>>> highest resolution shell.  But for experimental phasing it does not
>>> help to have many, many weak reflections..
>>>
>>> Is there any way of testing this, though? The only way I can think
>>> of is to refine against a poorer set with varying protocols, then
>>> improve the crystals/data and see which protocol for the poorer
>>> data gave the best agreement in the model comparison.
>>>
>>> And even that is not decisive - presumably the data would have
>>> come from different crystals, with maybe small differences between
>>> the models..
>>> Eleanor
>>>
>>>
>>>
>>> Shane Atwell wrote:
>>>>
>>>> Could someone point me to some standards for data quality,
>>>> especially for publishing structures? I'm wondering in particular
>>>> about highest shell completeness, multiplicity, sigma and Rmerge.
>>>>
>>>> A co-worker pointed me to a '97 article by Kleywegt and Jones:
>>>>
>>>> http://xray.bmc.uu.se/gerard/gmrp/gmrp.html
>>>>
>>>> "To decide at which shell to cut off the resolution, we nowadays
>>>> tend to use the following criteria for the highest shell:
>>>> completeness > 80 %, multiplicity > 2, more than 60 % of the
>>>> reflections with I > 3 sigma(I), and Rmerge < 40 %. In our
>>>> opinion, it is better to have a good 1.8 Å structure, than a poor
>>>> 1.637 Å structure."
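>>>>
>>>> Those criteria are mechanical enough to encode as a quick check; a
>>>> sketch, where the function name and the example shell numbers are
>>>> made up:
>>>>
>>>>   def passes_kj97(completeness, multiplicity, frac_i_gt_3sig, rmerge):
>>>>       # Kleywegt & Jones '97 highest-shell criteria, as quoted above
>>>>       return (completeness > 0.80 and multiplicity > 2 and
>>>>               frac_i_gt_3sig > 0.60 and rmerge < 0.40)
>>>>
>>>>   print(passes_kj97(0.85, 3.1, 0.45, 0.38))  # False: only 45% with I > 3 sigma
>>>>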
>>>>
>>>> Are these recommendations still valid with maximum likelihood
>>>> methods? We tend to use more data, especially in terms of the
>>>> Rmerge and sigma cutoffs.
>>>>
>>>> Thanks in advance,
>>>>
>>>> *Shane Atwell*
>>>>
>>
>> Sue Roberts
>> Biochemistry & Biophysics
>> University of Arizona
>>
>> [log in to unmask]
>