Dear Shijun,
For refinement you want to use as much good data as possible to get a good model. At the same time you want to make sure that your model is predictive for related data that was not used to build it. This is what is called holdout validation, and that is exactly the idea behind R-free: you have a work set on which you base your model, and a test set that you use to calculate how predictive the model is (R-free).
You can imagine that the actual model depends on which reflections you use, but as the work set is a large fraction of the full data set (90% or more), changing some of these reflections is not a big deal. For the test set it is a big deal, because the test set is relatively small. Say your test set consisted only of the 5% highest-resolution reflections: how predictive would your model be? Would that still be a fair test of the validity of the model? Probably not, so we take a random set, but (nowadays) we also make sure that the reflections in the work set and test set are not directly related, e.g. by twinning.
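To make the idea concrete, here is a toy Python sketch of picking a random test set (the function name and defaults are illustrative, not any real program's API; a real program would also keep related reflections, e.g. twin mates, in the same set, which is omitted here):

```python
import random

def assign_free_flags(n_reflections, test_fraction=0.05, seed=0):
    """Randomly mark ~test_fraction of reflections as the R-free test set.

    Returns a list of booleans: True = test (free) set, False = work set.
    """
    rng = random.Random(seed)
    return [rng.random() < test_fraction for _ in range(n_reflections)]

flags = assign_free_flags(50000)
# For a 50k-reflection data set, roughly 2500 reflections (5%) end up free.
print(sum(flags), "of", len(flags), "reflections flagged as test set")
```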
Now imagine your test set was tiny, say ten reflections. Would you get the same R-free as with ten different reflections? No. But your model is essentially the same, so apparently there is an error margin in R-free. It turns out from theory and from practical testing (done by at least Bruenger, Kleywegt, Tickle, Joosten, and Gruene over many years and papers) that this error margin is inversely proportional to the square root of the test set size and proportional to the magnitude of the R-factor itself.
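You can see that scaling in a quick simulation. This is a toy model, not real crystallographic data: each test reflection contributes a noisy residual around a "true" R of 0.20, and R-free is their average. The spread of the resulting R-free values shrinks roughly with the square root of the test set size:

```python
import random
import statistics

def simulated_rfree(n_test, rng):
    """Toy R-free: average of n_test noisy per-reflection residuals."""
    return statistics.mean(rng.gauss(0.20, 0.10) for _ in range(n_test))

rng = random.Random(1)
for n in (10, 100, 1000):
    samples = [simulated_rfree(n, rng) for _ in range(500)]
    # The spread drops by about sqrt(10) each time n grows tenfold.
    print(n, round(statistics.stdev(samples), 4))
```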
It turns out that 1000 reflections is typically safe, and as the average data set consists of roughly 50k reflections, 5% works quite well. However, if you have fewer than 20k reflections, you may need to increase the test set fraction. The 10% upper limit exists mostly to avoid leaving too few work set reflections to make a good model. If the size of your data set forces you to have a test set with fewer than, say, 500 reflections, you may need to take additional measures, such as performing k-fold cross-validation (making k models with k non-overlapping test sets and then taking the average R-free) or calculating R-complete (see the paper by Tim Gruene). Note that these two methods give very similar results.
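For completeness, the k-fold idea can be sketched like this. Here `refine` and `rfactor` are placeholders for a real refinement program and R-factor calculation (they are just callables passed in, not real APIs); the point is only the non-overlapping splits and the averaging:

```python
import random
import statistics

def kfold_rfree(reflections, k, refine, rfactor, seed=0):
    """k-fold cross-validated R-free: split the reflections into k
    non-overlapping folds, refine k models (each excluding one fold),
    compute R-free on each held-out fold, and return the average."""
    rng = random.Random(seed)
    shuffled = reflections[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]  # disjoint, cover everything
    rfrees = []
    for i, test in enumerate(folds):
        work = [h for j, fold in enumerate(folds) if j != i for h in fold]
        model = refine(work)              # model built from work set only
        rfrees.append(rfactor(model, test))
    return statistics.mean(rfrees)
```

With a small data set this gives every reflection a turn in a test set, at the cost of running k refinements instead of one.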
Shameless plug: PDB-REDO automatically handles the test set issues for small datasets and calculates the average R-free and R-complete if your test set cannot be made larger than 500 reflections.
Cheers,
Robbie
> -----Original Message-----
> From: CCP4 bulletin board <[log in to unmask]> On Behalf Of ???
> Sent: Tuesday, July 10, 2018 10:03
> To: [log in to unmask]
> Subject: [ccp4bb] R-flag choose
>
> Dear all
>
> I am confused about the choice of free R-flags. R-free is calculated
> from reflections not used in refinement, so what is the big difference
> between different percentages of R-flags? It is about 5% in CCP4/Refmac,
> while it is 10% in phenix refinement. What is the difference between them,
> and how do they affect the Rwork and Rfree values during refinement? Thanks a lot!
>
> best regards
>
> shijun
>
>
> ________________________________
>
> To unsubscribe from the CCP4BB list, click the following link:
> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=CCP4BB&A=1