
Dear Crystallographers,

One can see from many posts on this listserv that in any given X-ray
diffraction experiment, there are more data than merely the diffraction
spots. Given that we now have vastly increased computational power and data
storage capacity, does it make sense to rethink the paradigm for model
refinement? Do we still need to "reduce" the data at all? One could imagine
applying various functions to model the intensity observed at every single
pixel on the detector. This might be unnecessary in many cases, but in
others, such as those with substantial diffuse scattering or other
phenomena, perhaps modelling all of the pixels would be truer to the
underlying physics. Further, the gap in R values between high- and
low-resolution structures might narrow significantly, because we would be
able to model the data, i.e., reproduce the images from the models, equally
well in all cases. More information about the nature of the underlying
macromolecules might really be gleaned this way. Has this been discussed
yet?
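To make the idea concrete, here is a minimal toy sketch of what "refining against every pixel" could look like: a forward model predicting the intensity at each detector pixel (here just a flat background plus isotropic Gaussian spots, which is an invented stand-in for a real diffraction model), fit to a simulated noisy image by least squares. All parameter names and shapes are illustrative assumptions, not an actual refinement program.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 64x64 "detector image": two Gaussian spots on a flat background,
# with Poisson noise. All values are invented for illustration.
rng = np.random.default_rng(0)
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]

def forward_model(params):
    """Predicted intensity at every pixel: flat background plus a sum
    of isotropic Gaussian spots, each (amplitude, x0, y0, sigma)."""
    bg = params[0]
    img = np.full((ny, nx), bg)
    for amp, x0, y0, sig in params[1:].reshape(-1, 4):
        img += amp * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sig**2))
    return img

true_params = np.array([5.0,                      # background level
                        100.0, 20.0, 30.0, 2.0,   # spot 1
                         60.0, 45.0, 15.0, 1.5])  # spot 2
observed = rng.poisson(forward_model(true_params)).astype(float)

# Refine against every pixel, not just integrated spot intensities:
# the residual vector has one entry per pixel on the detector.
def residuals(params):
    return (forward_model(params) - observed).ravel()

start = true_params + rng.normal(scale=0.3, size=true_params.size)
fit = least_squares(residuals, start)
print(fit.x)  # recovered parameters, close to true_params
```

A real implementation would of course need a physically meaningful forward model (Bragg peaks, diffuse scattering, detector response, background), but the structure would be the same: every pixel contributes a residual, so phenomena between the spots inform the refinement too.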

Regards,

Jacob Keller

*******************************************
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
Dallos Laboratory
F. Searle 1-240
2240 Campus Drive
Evanston IL 60208
lab: 847.491.2438
cel: 773.608.9185
email: [log in to unmask]
*******************************************