Hi, I have some results from an MVB (multivariate Bayes) analysis and I wanted to check whether I'm interpreting them correctly. I compared the four models listed in the GUI (compact, sparse, smooth and support) across 17 subjects in a hippocampal region, using a sphere with a 10mm radius and 8 greedy search steps. I ran this analysis separately for two experimental conditions (working memory and priming), and in each case I defined an F contrast to test for activation during the relevant condition. From the resulting MVB.mat files, I computed the log model evidence as max(MVB.M.F) - MVB.M.F(1), i.e. the maximum free energy relative to that of the null model. I hope that's correct - I basically lifted it straight from the spm_mvb_bmc.m script.
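To be concrete, here is what I'm computing for each subject and model, translated from the MATLAB into a small Python sketch (assuming, as in spm_mvb_bmc.m, that MVB.M.F is the vector of free energies over the greedy search steps, with the first entry corresponding to the null model; the numbers are made up for illustration):

```python
def relative_log_evidence(F):
    """Best free energy over the greedy search steps, relative to the
    null (first) entry - my reading of max(MVB.M.F) - MVB.M.F(1)."""
    return max(F) - F[0]

# Hypothetical free energies across 4 greedy search steps for one subject/model
F = [-10.0, -4.5, -2.0, -3.1]
print(relative_log_evidence(F))  # 8.0
```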

Using a fixed-effects analysis (i.e. summing the log model evidences across subjects for each model), I get the following results.

WM condition: compact = 20.8961; sparse = 21.1689; smooth = 7.0270; support = 5.1632
Priming condition: compact = 30.2204; sparse = 39.8365; smooth = 11.9811; support = 14.1075
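So, for example, to compare compact against sparse at the group level in the WM condition, I'm treating the difference of the summed evidences as the group log Bayes factor (a sketch using the WM numbers above):

```python
# Fixed-effects results: summed log evidences across subjects (WM condition)
ffx_wm = {'compact': 20.8961, 'sparse': 21.1689, 'smooth': 7.0270, 'support': 5.1632}

# Group log Bayes factor for sparse vs compact = difference of summed log evidences
log_bf = ffx_wm['sparse'] - ffx_wm['compact']
print(round(log_bf, 4))  # 0.2728 - well below the conventional threshold of 3
```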

Using a random-effects analysis (the spm_bms function), I get these results:

WM condition: compact = 8.4255; sparse = 8.5376; smooth = 2.0087; support = 2.0282
Priming condition: compact = 2.4542; sparse =14.5932; smooth = 1.8210; support = 2.1316
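Since these sum to 21 (= 17 subjects + 4 models) in each condition, I'm assuming they are the Dirichlet concentration parameters that spm_bms returns; if that's right, normalising them gives the expected posterior model probabilities (sketched here with the Priming numbers):

```python
# Assumed interpretation: spm_bms outputs above are Dirichlet concentration
# parameters; normalising gives the expected posterior model probabilities.
alpha_priming = {'compact': 2.4542, 'sparse': 14.5932, 'smooth': 1.8210, 'support': 2.1316}

total = sum(alpha_priming.values())          # 21.0 = 17 subjects + 4 models
probs = {m: a / total for m, a in alpha_priming.items()}
print({m: round(p, 3) for m, p in probs.items()})
# {'compact': 0.117, 'sparse': 0.695, 'smooth': 0.087, 'support': 0.102}
```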

It seems to me from both of these analyses that, in the Priming condition, the sparse model is the clear winner over the other three contenders, while in the WM condition, both the compact and sparse models are an improvement over the other two, but there's no clear difference between them (as the difference in their log evidences, i.e. the log Bayes factor, is less than 3). Am I right so far?

Now, I understand that it's probably not appropriate to directly compare the results of the analyses from the two different conditions, but would I be justified in saying that there is stronger evidence here for sparse coding under the Priming condition than under the WM condition?

My other question relates to the meaning of the model priors. In the spm_mvb_u script, a brief description is provided after the name of each prior, but I don't understand them. For example, 'patterns are global singular vectors' or 'patterns are local Gaussian kernels' doesn't really mean much to me. Is there a document anywhere explaining what these mean?

Thanks for any help!
Ciara