Dear Crystallographers,
I have always wondered whether it would be possible, generally and rigorously, to quantify the amount of information in a series of measurements (crystallographic or otherwise), either absolutely (in bits?) or at least relatively. This would be especially useful in crystallography. For example, one could determine how much information is present in a dataset integrated with no resolution limit, then see how the information content diminishes as a function of cutoff. Also, in comparing two datasets with similar resolution but different B factors, the information distribution would differ, which might have ramifications.
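As one illustrative sketch of what such a measure might look like (this is not an established crystallographic statistic, just a toy model): treat each reflection as a noisy channel and score it with a Shannon-capacity-like quantity based on its signal-to-noise ratio, then accumulate the "bits" shell by shell. The shell populations and I/sigma values below are made up.

```python
import math

def bits_per_reflection(i_over_sigma):
    """Rough information score for one measurement, in bits,
    using the Shannon capacity form 0.5 * log2(1 + SNR^2)."""
    return 0.5 * math.log2(1.0 + i_over_sigma ** 2)

def shell_information(i_over_sigma_values):
    """Total 'bits' contributed by one resolution shell."""
    return sum(bits_per_reflection(x) for x in i_over_sigma_values)

# Hypothetical mean I/sigma per shell, falling off with resolution:
shells = [
    ("inf-3.0 A", [20.0] * 500),
    ("3.0-2.0 A", [8.0] * 1500),
    ("2.0-1.7 A", [2.0] * 1200),
    ("1.7-1.2 A", [0.8] * 3000),
]

cumulative = 0.0
for name, values in shells:
    cumulative += shell_information(values)
    print(f"{name}: cumulative ~{cumulative:.0f} bits")
```

With numbers like these, the weak outer shell still adds measurable information, so where one draws the cutoff visibly changes the total.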
In fitting data to functions generally, information quantification might be more complicated, since some data points are "worth" more than others (for example, points near versus far from the Kd of a binding curve). In fitting a Fourier series to a 3D electron density function, however, this might matter less, since each reflection contributes to the entire 3D density. I remember seeing a comment from James Holton here relating to this topic, in which he said that with very-high-precision low-resolution data, one can use B-sharpening to produce maps similar to those from higher-resolution data. It seems, then, that both precision and resolution are important in determining the goodness of a dataset. But, as far as I know, there is no direct measure of information quantity in crystallography--perhaps there should be?
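The B-sharpening point can be made concrete with a small sketch. Sharpening scales structure-factor amplitudes by exp(+B_sharp / (4 d^2)), which boosts high-resolution terms, but it scales their uncertainties by exactly the same factor; the signal-to-noise ratio is unchanged, which is why the trick only pays off when the underlying data are measured very precisely. The numerical values here are arbitrary.

```python
import math

def sharpen(amplitude, sigma, d, b_sharp):
    """Apply a sharpening B-factor at resolution d (Angstrom);
    returns (F, sigma), both scaled by exp(b_sharp / (4 d^2))."""
    scale = math.exp(b_sharp / (4.0 * d * d))
    return amplitude * scale, sigma * scale

# A hypothetical 2.0 A reflection, sharpened with B_sharp = 40 A^2:
f, s = sharpen(100.0, 5.0, 2.0, 40.0)
print(f"F = {f:.1f}, sigma = {s:.1f}, F/sigma unchanged: {f/s:.1f}")
```

So sharpening redistributes contrast in the map without creating information; the precision of the low-resolution measurements is what sets the ceiling.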
The case I have before me right now is how to compare truncated data to fully measured data. Say, for the sake of argument, that a given crystal would have diffracted to 1.2 Angstrom, but the data were truncated by the detector edge at 1.7 Angstrom. How would the information content of this dataset compare to that of the fully measured one? I suspect this depends on the B factor: the higher the B factor, the more of the information lies in the lower-resolution bins. So, if most of the information is captured before reaching the cutoff, perhaps the structure should be modelled similarly to a higher-resolution one, and perhaps with anisotropic B factors?
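A back-of-the-envelope sketch of that dependence, assuming only the standard Debye-Waller attenuation of intensities, exp(-B / (2 d^2)), and a reciprocal-space shell weighting of 1/d^4 (the resolution range, integration grid, and B values are arbitrary illustrative choices):

```python
import math

def attenuation(b_factor, d):
    """Debye-Waller intensity attenuation at resolution d (A) for B (A^2)."""
    return math.exp(-b_factor / (2.0 * d * d))

def fraction_inside_cutoff(b_factor, cutoff, d_min=1.2, d_max=50.0, n=2000):
    """Fraction of the total attenuated signal lying at resolutions
    lower (larger d) than `cutoff`, summed over thin shells."""
    total = inside = 0.0
    for k in range(n):
        d = d_min + (d_max - d_min) * (k + 0.5) / n
        # shell weight: attenuated intensity times reciprocal-space volume ~ 1/d^4
        w = attenuation(b_factor, d) / d ** 4
        total += w
        if d >= cutoff:
            inside += w
    return inside / total

for b in (10.0, 30.0, 60.0):
    print(f"B = {b:4.0f} A^2: fraction of signal at d >= 1.7 A "
          f"~ {fraction_inside_cutoff(b, 1.7):.2f}")
```

On this toy model, a high-B dataset loses very little of its signal to a 1.7 Angstrom truncation, while a low-B dataset leaves a substantial fraction beyond the cutoff, which is the intuition above made quantitative.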
Another question: if data are measured with high multiplicity but truncated at 1.7 Angstrom, how do they compare, in terms of information content, to a 1.2 Angstrom dataset measured less precisely?
It seems to me that the oft-rehearsed requirement of certain data:parameter ratios depends strongly on the precision of the measurements (nothing novel here), so a measure of "information," rather than either a simple ratio or an empirically based rule of thumb, might be the best guide in deciding which parameters to model.
JPK
*******************************************
Jacob Pearson Keller, PhD
Looger Lab/HHMI Janelia Research Campus
19700 Helix Dr, Ashburn, VA 20147
email: [log in to unmask]
*******************************************