Dear Ying

When conducting the family comparison, we can choose to compute BMA (Bayesian Model Averaging) for the winning family. A file 'BMS.mat' is then generated, containing the BMA results for each subject and for all subjects. The group-level mean DCM parameters are stored in BMS.DCM.rfx.bma.mEp in this file.

In some papers, the posterior probabilities corresponding to the mean DCM parameters of the winning family are regarded as indicators of significance. That is, a mean parameter with a posterior probability greater than 0.95 at the group level is regarded as significant.

I would like to know how to calculate the posterior probability of each mean DCM parameter for the winning family at the group level.

First, just to be pedantic on the language – there is no concept of ‘significance’ in Bayesian statistics. There is just the probability of a particular effect or model. That means that a model or effect with 94% probability is worth taking just as seriously as a model or effect with 95% probability. Nonetheless, you are right that it can be helpful to focus discussion on just the most probable effects – e.g. those exceeding 90% or 95% probability.

Once you’ve done your Bayesian model comparison, one option is to just produce a plot of all the connections from the BMA – i.e. don’t do further statistics. That’s because your main result is the model comparison, and by looking at the parameters, you’re just trying to interpret why you got that result. Alternatively, if you do want to compute the probability that each parameter was non-zero, you can do this as follows (this is untested code):

% Get the posterior mean of the connection of interest, e.g. A(1,1)
mu = BMS.DCM.rfx.bma.mEp.A(1,1);

% Get the posterior standard deviation
% (named sd to avoid shadowing MATLAB's built-in std function)
sd = BMS.DCM.rfx.bma.sEp.A(1,1);

% Standard deviation -> variance (spm_Ncdf expects a variance)
v = sd ^ 2;

% Probability that the parameter is non-zero
Pp = 1 - spm_Ncdf(0, abs(mu), v);


However, note the limitation of this calculation – it does not take into account the estimated covariance between parameters. To do this, you would need to switch your analysis pipeline to the PEB framework - https://en.wikibooks.org/wiki/SPM/Parametric_Empirical_Bayes_(PEB)
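For reference, a minimal PEB pipeline looks something like the following. This is an untested sketch; it assumes GCM is a cell array of fitted first-level DCM structures, one per subject, and that you are interested in the B parameters:

% Assemble fitted first-level DCMs (one cell per subject), e.g.
% GCM = {DCM_subject1; DCM_subject2; ...};

% Second-level design matrix: a column of ones models the group mean
M   = struct();
M.X = ones(numel(GCM), 1);

% Estimate the group-level (second-level) model over the B parameters
PEB = spm_dcm_peb(GCM, M, {'B'});

% Bayesian model reduction / averaging over second-level parameters.
% The resulting BMA carries posterior probabilities that account for
% the estimated covariance between parameters.
BMA = spm_dcm_peb_bmc(PEB);

% Review the results interactively
spm_dcm_peb_review(BMA, GCM);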


Also, my DCM.B is a 3x3x2 matrix, with three brain regions and two experimental effects. What I am interested in is the difference between the two experimental effects on the modulatory parameters – that is, the difference between DCM.B(:,:,1) and DCM.B(:,:,2).

Do you know how to test the difference between the two experimental effects on the modulatory parameters for the winning family, without using a t-test?

That is, how do we know whether the parameters in BMS.DCM.rfx.bma.mEp.B(:,:,1) are significantly different from those in BMS.DCM.rfx.bma.mEp.B(:,:,2)?

I think some kind of Bayesian test is needed, but I don't know how to do this.

To do this, you need to compute the probability of a difference between the posterior estimates of the connections. This is sometimes referred to as a Bayesian contrast. The code is something like this (I have not tested it):

% Get the estimated B-matrix parameters
% (sd rather than std, to avoid shadowing MATLAB's built-in std)
Ep = BMS.DCM.rfx.bma.mEp.B;
sd = BMS.DCM.rfx.bma.sEp.B;
v  = sd .^ 2;

% Prepare expected values and (diagonal) covariance matrix
Ep = spm_vec(Ep);
Cp = diag(spm_vec(v));

% Build a contrast with the same number of elements as the B matrix
con = zeros(size(BMS.DCM.rfx.bma.mEp.B));

% Define the contrast. E.g. here we'll compare B-matrix 2 vs
% B-matrix 1, pooled over all connections. Ensure the weights sum to zero.
con(:,:,1) = -1;
con(:,:,2) = 1;
con = spm_vec(con);

% Apply the contrast: mean and variance of the contrasted parameters
c  = con' * Ep;
vc = con' * Cp * con;

% Probability that the contrast is non-zero
Pp = 1 - spm_Ncdf(0, abs(c), vc);
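If instead you want the probability of a difference for a single connection, the same machinery applies with a contrast that picks out just that element. For example, for the connection from region 2 to region 1 (again untested, and reusing Ep and Cp from the code above):

% Contrast for one connection: B(1,2) under effect 2 minus effect 1
con = zeros(size(BMS.DCM.rfx.bma.mEp.B));
con(1,2,1) = -1;
con(1,2,2) = 1;
con = spm_vec(con);

% Mean and variance of the contrast, then probability non-zero
c  = con' * Ep;
vc = con' * Cp * con;
Pp = 1 - spm_Ncdf(0, abs(c), vc);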

Again – this is ignoring covariance between parameters, so going forward I recommend getting to grips with the more recently developed PEB framework.

Kind regards
Peter