Dear experts,
I'm performing BMS on my data and I'm getting some puzzling results. Also, there are a few very basic theoretical points I'm probably missing (even after reading the manual and papers):
1. I run BMS on my model space and get the results in the Model Exceedance Probability plot. Now, if I run it again on the very same model space, I get different results, and so on. Why is that? Is it related to the RFX inference?
2. Although I find a single model that seems to "perform" better than the others, its probability is not glorious. People seem to consider situations where this probability is < .9 as brittle. However, the relative nature of the BMS outcome (which depends on the model space) makes me wonder whether I have misinterpreted this criterion. Could anybody give me some insight into it?
3. It's my understanding that BMS relies on Free Energy (F), since SPM abandoned AIC and BIC. Let's assume I have 10 models, each with a given value of F. Is the best model the one with the lower score, even if negative? Sorry for the very naive question, but I couldn't find any explicit statement about this. Put another way: if I have model A with F = -1.97 and model B with F = 0.34, is model A better than B, or the other way around?
4. If I run BMS and select the model with the highest probability in the Model Exceedance plot (BMS.DCM.rfx.xp, if I'm right), it matches neither the model with the lowest Free Energy nor the one with the highest (as I'm no longer sure which I should look at; see point 3). How is that? I assume there is some other statistical process during BMS. Is it acceptable to select the "best fit" model based on the Free Energy score alone?
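To make point 1 concrete, here is my current guess at why repeated runs differ: I understand that the RFX exceedance probabilities are estimated by Monte Carlo sampling from a Dirichlet posterior over model frequencies, so different runs use different random draws. A minimal Python sketch of that idea (the alpha values are made up for illustration, not from SPM):

```python
import random

# Hypothetical Dirichlet parameters (alpha) for 3 models, standing in
# for the posterior over model frequencies that RFX BMS estimates.
alpha = [3.2, 6.1, 2.7]

def exceedance_probs(alpha, n_samples=20000, seed=None):
    """Monte Carlo estimate of exceedance probabilities: for each model,
    the probability that it is the most frequent one in the population."""
    rng = random.Random(seed)
    wins = [0] * len(alpha)
    for _ in range(n_samples):
        # A Dirichlet draw is a normalised vector of Gamma draws;
        # normalisation doesn't change which component is largest.
        g = [rng.gammavariate(a, 1.0) for a in alpha]
        wins[max(range(len(g)), key=lambda i: g[i])] += 1
    return [w / n_samples for w in wins]

# Two runs with different seeds give slightly different xp values,
# which is my guess for why repeated BMS runs don't match exactly.
print(exceedance_probs(alpha, seed=1))
print(exceedance_probs(alpha, seed=2))
```

If that is right, the run-to-run variability is just sampling noise and should shrink with more samples; please correct me if the mechanism is different.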
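And to make point 3 concrete, here is how I currently understand the relation between F and model probabilities in the simple fixed-effects case: F approximates the log model evidence, so under a uniform prior over models the posterior probabilities would be a softmax over the F values. A sketch of my understanding in Python (not taken from SPM code; the two F values are my example models from point 3):

```python
import math

# Free energies (approximate log model evidences) for my two example models
F = {"A": -1.97, "B": 0.34}

# Under a uniform prior over models, P(m|y) is proportional to exp(F_m),
# so posteriors come from a softmax over the free energies.
m = max(F.values())  # subtract the max for numerical stability
w = {k: math.exp(v - m) for k, v in F.items()}
z = sum(w.values())
post = {k: v / z for k, v in w.items()}
print(post)
```

If this sketch is correct, the higher (less negative) F wins, and the sign itself doesn't matter, only the differences between models; I'd appreciate confirmation that this is the right reading.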
Thank you very much in advance for your time!
s