On Wed, 27 Oct 2010, Frank von Delft wrote:
>>>
>>> So, since the experimental error is only a minor contribution to the total
>>> error, it is arguably inappropriate to use it as a weight for each hkl.
>> I think your logic has run off the track. The experimental error is an
>> appropriate weight for the Fobs(hkl) because that is indeed the error
>> for that observation. This is true independent of errors in the model.
>> If you improve the model, that does not magically change the accuracy
>> of the data.
> Sorry, still missing something:
>
> In the weighted Rfactor, we're weighting by 1/sig**2 (right?). And the
> reason for that is, presumably, that when we add a term (Fo-Fc) but Fo is
> crap (huge sigma), we need to ensure we don't add very much of it -- so we
> divide the term by the huge sigma.
Correct.
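[Aside, to pin down the convention being discussed: one common form of the weighted R factor is Rw = sqrt( sum w*(Fo-Fc)^2 / sum w*Fo^2 ) with w = 1/sig(Fo)^2. A minimal illustrative sketch, with made-up numbers, showing how a badly measured reflection (huge sigma) is suppressed in Rw even when |Fo-Fc| is large:]

```python
import math

def r_factors(fo, fc, sig):
    """Unweighted R and weighted Rw (w = 1/sigma^2) for parallel lists
    of observed amplitudes, calculated amplitudes, and sigmas."""
    r = sum(abs(o - c) for o, c in zip(fo, fc)) / sum(fo)
    w = [1.0 / s**2 for s in sig]
    rw = math.sqrt(
        sum(wi * (o - c) ** 2 for wi, o, c in zip(w, fo, fc))
        / sum(wi * o**2 for wi, o in zip(w, fo))
    )
    return r, rw

# Toy data: the third reflection is poorly measured (sigma = 50),
# so its large |Fo-Fc| contributes little to Rw.
fo  = [100.0, 80.0, 60.0]
fc  = [ 95.0, 78.0, 30.0]
sig = [  2.0,  2.0, 50.0]
print(r_factors(fo, fc, sig))
```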
> But what if Fc also is crap? Which it patently is: it's not even within 20%
> of Fo, never mind vaguely within sig(Fo). Why should we not be
> down-weighting those terms as well?
Because here we want the exact opposite. If Fc is hugely different from a
well-measured Fobs then it is a sensitive indicator of a problem with the model.
Why would we want to down-weight it?
Consider the extreme case: if you down-weight all reflections for which Fc does
not already agree with Fo, then you will always conclude that the current model,
no matter what random drawer you pulled it from, is in fine shape.
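[To make that extreme case concrete, here is a toy illustration with a deliberately circular weighting scheme (hypothetical, not anything used in practice): if weights shrink as |Fo-Fc| grows, even a completely random set of Fc values yields a small Rw, because the scheme hides exactly the reflections that disagree.]

```python
import math
import random

def rw(fo, fc, weights):
    """Weighted R factor for a given (arbitrary) weighting scheme."""
    return math.sqrt(
        sum(w * (o - c) ** 2 for w, o, c in zip(weights, fo, fc))
        / sum(w * o**2 for w, o in zip(weights, fo))
    )

random.seed(1)
fo = [random.uniform(10.0, 100.0) for _ in range(1000)]
fc = [random.uniform(10.0, 100.0) for _ in range(1000)]  # model from a "random drawer"

uniform_w = [1.0] * len(fo)
# Circular weighting: suppress precisely the reflections where Fc disagrees.
circular_w = [math.exp(-(o - c) ** 2) for o, c in zip(fo, fc)]

print(rw(fo, fc, uniform_w))   # large: the model is nonsense
print(rw(fo, fc, circular_w))  # far smaller: the disagreement has been hidden
```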
> Or can we ignore that because, since all terms are crap, we'd simply be
> down-weighting the entire Rw by a lot, and we'd be doing it for the Rw of
> both models we're comparing, so they'd cancel out when we take the ratio
> Rw1/Rw2?
Not sure I follow this.
> But if we're so happy to fudge away the huge gorilla in the room, why would
> we need to be religious about the little gnats on the floor (the sig(Fo))?
> Is there then really a difference between R1/R2 and Rw1/Rw2, for all
> practical purposes?
Yes. That was the message of the 1970 Ford & Rollet paper that Ian provided a
link for.
Ethan
> (Of course, this is all for the ongoing case we don't know how to model the
> R-factor gap. And no, I haven't played with actual numbers...)
>
> phx.
>