Dear Sam,
Please see inline responses...
> I'm checking to see if what I did was okay. I have fMRI data from over
> 60 subjects (half patients, half controls) and am working with a
> somewhat 'high level' network (i.e. no primary sensory regions). I had
> questions about (a) the structure of the network itself (I knew the
> nodes but not the exact adjacency matrix or driving input
> location), (b) the location of task modulation of the network and (c)
> differences in any of this (both model structure and parameters)
> between patients and controls. I addressed the first two questions by
> performing my analysis in two BMS stages:
>
> First I wanted to figure out the structure of the network itself (with
> no modulations), and specified 16 models to represent my hypotheses
> about the adjacency matrix and driving input location(s). Some of
> these models I felt pretty strongly about, but others I included to
> test unexpected possibilities, or as we say colloquially, "cover my
> ass". Fortunately the same model won for both patients and controls
> using RFX BMS, i.e. it had the highest expected and exceedance
> probabilities. And it made neuroanatomical sense too.
>
> Because the same model won in both groups I took that winning model as
> backdrop, so to speak, and specified a couple dozen permutations of
> task-modulatory hypotheses. Again, I had strong feelings about some
> but not all of these ("feelings" derived from a priori anatomical
> knowledge and the literature). The top 3 winning models (again using
> RFX BMS) were the same in both groups, although their ordering (who
> won) was different.
>
> Ok, my questions:
>
> In general, is it suboptimal or just wrong to specify models in two
> stages like this? More specifically, should I have specified and
> estimated everything at once and then looked further at families to
> determine my best adjacency matrix and driving input? Obviously the
> way I did it saved me from having to specify and estimate another
> 21,000+ models.
This is OK, and particularly helpful if your DCM model space is very
large (as in your case). The first step approximates the best-supported
circuitry (driving region(s) plus endogenous connections) under fixed
(or absent) modulations. The second step then searches for the optimal
context-dependent connectivity (the modulated connections). You will
notice I used the term "approximates": ideally, these questions would be
investigated at once within a single model space, since all parameter
estimates are conditional on the predefined models.
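Just to make the saving concrete, here is a back-of-envelope sketch in
Python. The 16 structural models and the couple dozen modulatory
hypotheses are your numbers; the count of 11 modulable connections is
purely an assumption for illustration:

# Why the staged search is so much cheaper.  Only the 16 structural
# models and ~24 modulatory hypotheses come from the description above;
# the number of modulable connections is an illustrative assumption.

n_structure = 16   # hypotheses about adjacency matrix + driving inputs
n_modulation = 24  # "a couple dozen" task-modulation hypotheses

# Two-stage procedure: estimate the two sets separately, per subject.
two_stage = n_structure + n_modulation        # 40 models

# One-shot procedure: every structure crossed with every admissible
# pattern of modulated connections.  With, say, 11 connections that
# could each be modulated or not (an assumption), the space explodes:
n_connections = 11
one_shot = n_structure * 2 ** n_connections   # 32768 models

print(f"two-stage: {two_stage} models per subject")
print(f"one-shot:  {one_shot} models per subject")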
> Was I wrong in 'covering my ass' by including models that I wasn't
> super confident would actually win? Only including models I was super
> confident in would've limited my overall model space to one in which
> specifying all models at once may have been feasible. However, I
> would've wondered about the other, untested possibilities a lot more.
>
Sampling a large space of models can in principle increase your belief
in a particular set of models, because they have been compared against
a large number of alternative explanations of the same data. On the
other hand, you may end up with some "unexpected" models having higher
posterior probabilities. This is why it is better to limit (if you can)
your model space to "plausible" models and to avoid including models
that you know don't make any sense (e.g. on the basis of previous
studies).
Having said that, if the random-effects BMS analysis reveals other
winning models, this could be interesting in itself and may motivate
further investigation, in particular if these unexpected (novel?)
models are anatomically and clinically meaningful.
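Since both of your stages rest on random-effects BMS, it may help to
see exactly what the exceedance probability measures: the probability
that a model is more frequent in the population than any competitor.
Below is a minimal numpy sketch of how it can be obtained from the
Dirichlet posterior over model frequencies (in the spirit of Stephan et
al. 2009); the Dirichlet counts are invented for illustration:

import numpy as np

# Monte Carlo estimate of exceedance probabilities in RFX BMS.
# alpha: (invented) Dirichlet posterior counts over 4 candidate models.
rng = np.random.default_rng(0)
alpha = np.array([12.0, 3.0, 2.0, 1.5])

samples = rng.dirichlet(alpha, size=100_000)  # sampled frequencies r
winners = samples.argmax(axis=1)              # most frequent model per draw
xp = np.bincount(winners, minlength=len(alpha)) / len(winners)

print("exceedance probabilities:", np.round(xp, 3))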
> Finally, can I look at parameter differences between groups if the
> final winning models are different? It seems like the BMA option
> allows comparison between models if they're the same winning model or
> between families, but that I can't specify, for example, "the winner
> in controls but model #13 in patients", where model #13 isn't the
> winner of that group but is of course identically specified as the
> winner in controls.
>
If the two groups favour different winning models, then this is in
itself a very interesting result: it demonstrates a difference at the
system level between patients and controls, and you don't necessarily
have to compare the parameters. However, if the connectivity parameters
are important for your hypothesis, then you can run BMA over the SAME
set of models (e.g. one family, or the whole model space) in both
groups and compare the resulting parameters between patients and
controls. You should of course expect some differences between the
connectivity parameters in this context, because the models that
dominate the Bayesian averaging differ between patients and controls. I
hope this makes sense...
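As a conceptual sketch only (not the SPM implementation): the BMA
posterior mean of a connection is the average of the model-conditional
means weighted by the posterior model probabilities, taken over the
SAME model set in both groups. All numbers below are invented:

import numpy as np

# Toy BMA over a common model set, followed by a group comparison of
# one averaged connection.  In practice the per-model posterior means
# and model probabilities come from the subject-level DCM fits and BMS.

n_models, n_subjects = 4, 30

def bma_connection(theta, model_prob):
    # theta:      (n_subjects, n_models) posterior means of one connection
    # model_prob: (n_subjects, n_models) posterior model probabilities
    # Weighted average of conditional means = BMA posterior mean.
    return (theta * model_prob).sum(axis=1)

rng = np.random.default_rng(1)
theta_pat = rng.normal(0.4, 0.2, (n_subjects, n_models))
theta_con = rng.normal(0.6, 0.2, (n_subjects, n_models))
p_pat = rng.dirichlet(np.ones(n_models), n_subjects)
p_con = rng.dirichlet(np.ones(n_models), n_subjects)

bma_pat = bma_connection(theta_pat, p_pat)
bma_con = bma_connection(theta_con, p_con)

# Both groups are averaged over the SAME set of models, so the
# estimates stay comparable even though the winning model differs.
print("patients:", bma_pat.mean().round(3),
      "controls:", bma_con.mean().round(3))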
For more details about DCM, and DCM in patients, see the review at:
http://www.ncbi.nlm.nih.gov/pubmed/20838471
I hope this helps,
Good luck,
Mohamed