Dear Peter, thank you very much for your reply.

>> 1. I run BMS on my model space and get the results as model exceedance probabilities. If I run it again on the very same model space, I get different results each time. Why is that? Is this related to the RFX inference?
> 
> SPM has an implementation of RFX BMS which uses sampling, and another which avoids sampling by using approximations (variational Bayes). I believe the DCM batch uses the sampling approach. The number of samples is usually enough to give stable results, but perhaps it isn't in your case. You'll find the number of samples coded on line 26 of the function spm_BMS_gibbs.m - feel free to increase it. 

I see. I actually noticed that the problem disappears if I use FFX. However, even if I increase the number of samples, the problem remains for RFX. I suspect this is due to the high number of models (>600) I'm considering, so I'll probably implement the "family" approach.
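
In case it helps, here is roughly how I'm now calling BMS with more samples (a minimal sketch, assuming the spm_BMS interface in recent SPM versions, which may differ across releases; lme stands for my [subjects x models] matrix of free energies):

    % lme: [Nsub x Nmodels] matrix of log model evidences (free energies)
    Nsamp = 1e6;                                      % number of Gibbs samples; increase if results are unstable
    [alpha, exp_r, xp] = spm_BMS(lme, Nsamp, 0, 1);   % 4th argument = 1 selects the sampling (Gibbs) route
    % exp_r: expected model probabilities; xp: exceedance probabilities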


> 
>> 2. Although I find a single model that seems to "perform" better than the others, its probability is not impressive. It seems people tend to consider situations where this probability is <.9 as unreliable. However, the relative nature of the BMS outcome (which depends on the model space) makes me wonder whether I have misinterpreted this criterion. Could anybody give me some insight into it?
> 
> Generally people like 0.95 for 'strong evidence' and 0.9 for 'positive evidence'. If you have a large number of models, you could group them into a smaller number of 'families', and compare the evidence for each family (there's an option for this in the batch). You could also do Bayesian Model Averaging, which takes a weighted average over the models' connection strengths (weighted by the free energy of each model). So you could write in your paper something like: '3 out of 10 models shared most of the probability density, and the weighted average of their connection strengths is shown in the figure'.

Yes, I knew about this "families" approach, but at first it didn't seem to improve my results much. I'll try again now that I have fixed some things in my model space (e.g. a better definition of my families). However, I still have doubts about this criterion: even when you group your model space into families, it isn't guaranteed that you reach .9 evidence. Should one keep partitioning the model space (in a principled way) until the desired criterion is reached? 
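
For reference, the family comparison I'm setting up looks roughly like this (a sketch assuming the spm_compare_families interface; partition_vec and the family names are placeholders for my own model space):

    % lme: [Nsub x Nmodels]; the partition assigns each model to one family
    family.infer     = 'RFX';
    family.partition = partition_vec;             % e.g. [1 1 2 2 3 ...], one entry per model
    family.names     = {'famA', 'famB', 'famC'};  % one label per family
    [family, model]  = spm_compare_families(lme, family);
    family.xp                                     % exceedance probability of each family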

> 
>> 3. It's my understanding that BMS relies on the Free Energy (F), since SPM abandoned AIC and BIC. Let's assume I have 10 models, each with a given value of F. Is the best model the one with the lowest score, even if negative? Sorry for the very naive question, but I couldn't find any explicit statement about this. Put another way: if model A has F = -1.97 and model B has F = 0.34, is model A better than B, or the other way around?  
> 
> The sign doesn't matter - the best model has the most positive free energy - in your example, model B. For clarity you may like to view the free energies relative to the worst model (this makes them into log Bayes factors), e.g. plot(Fs - min(Fs)), where Fs is a vector of free energies.
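
(For future readers, a minimal sketch of gathering those free energies, assuming each fitted model was saved to its own .mat file with the free energy stored in DCM.F, as in SPM's DCM structs; dcm_files is a hypothetical cell array of file paths:)

    % collect free energies across models
    Fs = zeros(1, numel(dcm_files));
    for m = 1:numel(dcm_files)
        tmp   = load(dcm_files{m}, 'DCM');   % each file holds one fitted DCM struct
        Fs(m) = tmp.DCM.F;                   % free energy: approximates the log evidence
    end
    plot(Fs - min(Fs));                      % log Bayes factors relative to the worst model
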
> 
>> 4. If I run BMS and select the model with the highest posterior probability (from the Model Exceedance plot, so BMS.DCM.rfx.xp, if I'm right), this doesn't match the model with the lowest Free Energy, nor the one with the highest value (as I'm no longer sure what I should look at - see point 3). How is that? I assume there is some other statistical process during BMS. Is it acceptable to select the "best fit" model based on the Free Energy score alone?
> 
> You say the RFX XP doesn't fit with the highest free energy. But how are you computing the highest free energy? Summing over subjects? That would be a fixed effects analysis, whereas the RFX model takes into account that different subjects may have had their data generated by different models. For more detail on this, check out https://www.sciencedirect.com/science/article/pii/S1053811909011999 and https://www.sciencedirect.com/science/article/pii/S1053811909002638 .


Ok, thanks! Actually, I see that using RFX the best model matches the one with the highest SF score (from the DCM struct), which I think makes perfect sense now.
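
To make sure I was comparing like with like, I now compute both explicitly (again just a sketch; lme is the [subjects x models] free-energy matrix, and the spm_BMS call assumes the same interface as in the snippet above):

    % FFX: sum log evidences over subjects; the maximum is the fixed-effects winner
    [~, best_ffx] = max(sum(lme, 1));

    % RFX: pick by exceedance probability; this can legitimately differ from FFX
    [alpha, exp_r, xp] = spm_BMS(lme, 1e6, 0, 1);
    [~, best_rfx] = max(xp);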

> 
> Do let us know whether you have any further concerns / questions.
> 
> Best
> Peter