Dear Ciara,
> It seems to me from both of these analyses that, in the Primed condition,
> the sparse model is the clear winner over the other 3 contenders, while in
> the WM condition, both the compact and sparse model are an improvement
> over the other two, but there's no clear difference between them (as the
> difference in their log evidences, i.e. the log Bayes factor, is less than
> 3). Am I right so far?
Yes, this is entirely correct.
> Now, I understand that it's probably not appropriate to directly compare
> the results of the analyses from the two different conditions, ...
Precisely - we cannot compare model evidences across conditions. The reason
is that the dependent variable (i.e., the F contrast you specified) is
different. Comparing model evidences across conditions
would be equivalent to saying: my model explains these data quite well, your
model explains those data quite well, which model is better? The question is
simply ill-defined.
Having said this, it is worth noting that a decoding model, such as
the one underlying MVB, does allow us to ask which regions should be
included in a model. This is because the dependent variable remains the
same. This is in contrast to encoding models, such as DCM, where regional
time series form the dependent variable, and in which we therefore cannot
use Bayesian model selection to decide which regions should be included.
> ... but would I be justified in saying that there is stronger evidence
> here for sparse coding under the Priming conditions than under the WM
> conditions ?
Yes, this is exactly the conclusion your results support. Since you have
already carried out a random-effects analysis, that is the preferred
analysis to report.
Very best wishes,
Kay
-----Original Message-----
From: Ciara Greene
Sent: Wednesday, March 09, 2011 3:25 PM
To: [log in to unmask] ; Kay Brodersen
Cc: Ciara Greene
Subject: Re: multivariate bayes analysis
Hi, I have some results from an MVB analysis and I wanted to check whether
I'm interpreting them correctly. I compared the 4 models listed in the GUI
(compact, sparse, smooth and support) across 17 subjects in a hippocampal
region using a sphere with a 10mm radius and 8 greedy search steps. I ran
this analysis separately for two experimental conditions (working memory and
priming) and in each case I defined an F contrast to test for activation
during the relevant condition. From the resulting MVB.mat files, I computed
the model evidence as max(MVB.M.F) - MVB.M.F(1), i.e. the maximum free
energy (log evidence) relative to the null model. I hope that's correct - I
basically lifted it straight from the spm_mvb_bmc.m script.
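In plain code, that per-subject quantity can be sketched as follows (Python
rather than MATLAB, with a made-up free-energy vector standing in for
MVB.M.F):

```python
# Sketch of the log-evidence computation described above. Assumption:
# F holds the free energies from MVB.M.F, one per greedy-search step,
# with F[0] corresponding to the null model.
def log_evidence(F):
    """Log evidence of the best greedy-search step relative to the null model."""
    return max(F) - F[0]

# Hypothetical free-energy trajectory (for illustration only)
F = [0.0, 12.3, 18.7, 21.2, 20.9]
print(log_evidence(F))  # -> 21.2
```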
Using a fixed-effects analysis (i.e. summing the log model evidences across
subjects for each model) I get the following results.
WM condition: compact = 20.8961; sparse = 21.1689; smooth = 7.0270; support
= 5.1632
Priming condition: compact = 30.2204; sparse = 39.8365; smooth = 11.9811;
support = 14.1075
Using a random-effects analysis (the spm_bms function) I get these results:
WM condition: compact = 8.4255; sparse = 8.5376; smooth = 2.0087; support =
2.0282
Priming condition: compact = 2.4542; sparse =14.5932; smooth = 1.8210;
support = 2.1316
It seems to me from both of these analyses that, in the Primed condition,
the sparse model is the clear winner over the other 3 contenders, while in
the WM condition, both the compact and sparse model are an improvement over
the other two, but there's no clear difference between them (as the
difference in their log evidences, i.e. the log Bayes factor, is less than
3). Am I right so far?
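To make the threshold concrete, here is the WM sparse-vs-compact comparison
using the fixed-effects numbers above, with the usual reading of a
log-evidence difference of 3 ("strong" evidence) as a Bayes factor of about
20:

```python
import math

# WM fixed-effects log evidences reported above
wm = {"compact": 20.8961, "sparse": 21.1689}

log_bf = wm["sparse"] - wm["compact"]  # difference in log evidence
bf = math.exp(log_bf)                  # corresponding Bayes factor

# log_bf is about 0.27, a Bayes factor of roughly 1.3 -- well short of
# the conventional threshold of 3 on the log scale (Bayes factor ~ 20)
print(log_bf, bf)
```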
Now, I understand that it's probably not appropriate to directly compare the
results of the analyses from the two different conditions, but would I be
justified in saying that there is stronger evidence here for sparse coding
under the Priming conditions than under the WM conditions ?
My other question relates to the meaning of the model priors. In the
spm_mvb_u script, a brief description is provided after the name of each
prior, but I don’t understand them. For example, ‘patterns are global
singular vectors’ or ‘patterns are local Gaussian kernels’ doesn’t really
mean much to me. Is there a document anywhere explaining what these mean?
Thanks for any help!
Ciara