Dear Yury,
My paper from 2002 is pretty old and was really just a first attempt at
automating constraint selection/weighting.
You will find more up-to-date explanations of multiple constraint
selection and MSP in the following references:
Phillips et al., NI, 2005; Mattout et al., NI, 2006; Friston et al.,
NI, 2008.
Best,
Chris
===================================================
Christophe Phillips, Ir, PhD
FRS-FNRS. Research Associate
Assistant professor in applied sciences
Cyclotron Research Centre, B30
University of Liege, Sart Tilman
4000 Liege, Belgium
Tel: +32 4 366 2316 (secr.)
+32 4 366 2366
Fax: +32 4 366 2946
email: [log in to unmask]
===================================================
On 19/09/2010 21:35, Vladimir Litvak wrote:
> Hi Yury,
>
> I'm afraid my expertise does not go deep enough to answer this. The
> only thing I can say is that I'm quite sure that the present
> implementation in SPM8 is different from what was described in
> Phillips et al. although the principles might be similar. I'm CCing
> this to the list without the attachment (perhaps that was the reason
> why you couldn't post it) and I hope one of the EM experts will
> comment.
>
> Best,
>
> Vladimir
>
> On Tue, Sep 7, 2010 at 5:13 PM, Yury Petrov<[log in to unmask]> wrote:
>> Vladimir, I emailed the following message to the SPM mail list ([log in to unmask]) almost two weeks ago, and I don't believe it was posted. Do you have any comments?
>>
>> ----------------------------------
>> Dear All,
>>
>> I have a conceptual concern regarding the MSP algorithm used by SPM8 to localize sources of EEG/MEG activity. The algorithm is based, in part, on an iterative EM scheme used to estimate source priors (the source covariance matrix) from the measurements. As described in the Phillips et al. 2002 paper, this scheme works as an iterative Bayesian estimator: first it estimates the sources, then calculates the resulting source covariance from that estimate, next it (effectively) uses it as the new prior for the sources, estimates the sources again, and so on. However, applying a Bayesian update iteratively to the same data is a common pitfall and should not be done, because each such iteration amounts to introducing new fictitious data. I attached a nice introductory paper illustrating the pitfall on page 1426. In particular, the outcome of the iterations may become biased toward the original source covariance used. In my test application of the described EM algorithm, I found that scaling the original source covariance matrix changes the resulting source estimate, which in principle should not happen. For comparison, this problem does not occur when the source covariance parameters are learned using ordinary or generalized cross-validation (OCV or GCV).
>>
>> Best,
>> Yury
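[The pitfall Yury describes can be demonstrated in a few lines. The sketch below is a hypothetical 1-D Gaussian example, not the actual SPM/MSP or EM code: a Gaussian mean is estimated by conjugate Bayesian updating, and the posterior is then fed back in as the prior while the SAME data are reused at every iteration, so the posterior precision grows as if fresh independent data had been observed.]

```python
import numpy as np

# Hypothetical illustration (not SPM's MSP/EM implementation): estimate the
# mean of a Gaussian with known noise variance, then wrongly iterate the
# Bayesian update using the same data at every pass.

rng = np.random.default_rng(0)
true_mean, noise_var = 2.0, 1.0
x = rng.normal(true_mean, np.sqrt(noise_var), size=10)

def update(prior_mean, prior_var, data, noise_var):
    """One conjugate Gaussian update of the unknown mean given the data."""
    n = data.size
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

# Legitimate: a single update with prior N(0, 1).
m1, v1 = update(0.0, 1.0, x, noise_var)

# Illegitimate: feed the posterior back in as the prior 20 times,
# reusing the same 10 data points each time.
m, v = 0.0, 1.0
for _ in range(20):
    m, v = update(m, v, x, noise_var)

# The iterated posterior variance has collapsed as if 200 independent
# samples had been seen; the estimate is spuriously overconfident, and
# its value depends on how the original prior variance was scaled.
print(f"single update var = {v1:.4f}, iterated var = {v:.4f}")
```

[After one update the posterior precision is 1 + 10 = 11; after 20 reuses it is 1 + 200 = 201, even though no new data ever arrived. This is the same mechanism by which an iterated scheme can end up biased toward (or sensitive to the scaling of) the initial source covariance.]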