
Hi Ethan,

> > mainly because (a) the calculation of likelihood is only based on a 
> > subset of the 'data' that are obtained from an X-ray diffraction 
> > experiment (for example, we ignore diffuse scattering as Ian 
> > pointed-out),
> 
> I do not think that is a valid criticism.  In any field of science 
> one might hypothesize that conducting a different kind of experiment
> and fitting it in accordance with a different theory would produce
> a different model.  But that is only a hypothetical;  it does not
> invalidate the analysis of the experiment you did do based on the
> data you did collect.

For the example I mentioned (diffuse scattering), the experiment would be 
identical. Although using only a subset of the available information may 
not invalidate the analysis performed, it is still not the best that can 
be done with the data in hand.


> > (b) we consciously avoid 'prior' because this would make the models 
> > 'subjective', meaning that better informed people would deposit (for 
> > the same data) different models than the less well informed,
> 
> I don't know of anyone who consciously avoids using their prior 
> knowledge to inform their current work.  But yes, people with more 
> experience may in the end deposit better models than people with little 
> experience.  That's why it is valuable to have automated tools like 
> Molprobity to check a proposed model against established prior 
> expectations.  It's also one way this bulletin board is valuable, because 
> it allows those with less experience to ask advice from those with more 
> experience.

Most people would like to think that the models they deposit correspond to 
an 'objective' representation of the experimentally accessible physical 
reality. The validation tools, mainly by enforcing a uniformity of 
interpretation, discourage (rather than encourage) the incorporation into 
the model of prior knowledge about the problem at hand, and thus offer 
their users the safety of an 'objectively validated model'.



> > (c) the format of the PDB does not offer much room for 'creative 
> > interpretations' of the electron density maps [for example, you can't 
> > have discrete disorder on the backbone (or has this changed ?)].
> 
> Could you expand on this point?  
> I am not aware of any restriction on multiple backbone conformations,
> now or ever.   It is true that our refinement programs have not always
> been very well suited to refine such a model, but that is not a fault
> of the PDB format.

I stand corrected on that. It was probably just me :-)



> > I sense that what is being deposited is not the 'best model' in any 
> > conceivable way, but the model that 'best' accounts for the final 
> > 2mFo-DFc map within the limitations of the program used for the final 
> > refinement.
> 
> That would be true if the refinement is conducted in real space.
> However, it is nearly universal to do the final refinement in
> reciprocal space.

The emphasis of what I said was clearly on model building, and not on the 
refinement methodology. The reference to the refinement program was again 
model-centric (ranging from the treatment of hydrogens to the bulk 
solvent model used).


Best regards,
Nicholas


-- 


          Dr Nicholas M. Glykos, Department of Molecular Biology
     and Genetics, Democritus University of Thrace, University Campus,
  Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620,
    Ext.77620, Tel (lab) +302551030615, http://utopia.duth.gr/~glykos/