Hi Yury,
I'm forwarding to the list as other people might want to join the discussion.
Vladimir
---------- Forwarded message ----------
From: Yury Petrov <[log in to unmask]>
Date: Wed, Sep 22, 2010 at 4:50 PM
Subject: Re: iterative Bayes learning in MSP
To: Karl Friston <[log in to unmask]>, Christophe Phillips
<[log in to unmask]>, Vladimir Litvak <[log in to unmask]>
Hi Karl,
There are cases where the EM algorithm cannot be used, and I believe
that this is one of them. You are trying to estimate both the model
parameters and their covariance at the same time. In general, EM can
be used for this. But because in this case the covariance estimate
feeds back into the prior for the parameters, estimating both
iteratively from the same data violates the Bayesian learning rule.
Yury
On Sep 21, 2010, at 7:16 AM, Karl Friston wrote:
> Dear Yury and Vladimir,
>
> The source reconstruction (in MSP and all other modes) uses a Gaussian process model (formulated as ReML),
> with a single (iterated) step. It can be regarded as the M-step of an EM scheme (see Friston et al. 2007 for details).
> Crucially, this does not estimate the prior covariance of sources from an estimate of the sources. It optimizes
> the prior covariance directly from the sample data covariance. The Bayesian perspective comes from the implicit
> hierarchical modelling of the covariance, which makes the covariance component estimation an empirical Bayes
> estimate. It is not subject to bias of the sort Yury describes. Furthermore, it optimizes the evidence (marginal likelihood)
> of the implicit Gaussian process model, which is the same quantity that cross-validation tries to approximate.
>
> I hope this helps.
>
> With very best wishes,
>
> Karl
>
> PS If any of this answer is obscure it might be a good idea to familiarize yourself with the deep relationship between hierarchical
> models and empirical Bayes. It is this relationship which resolves Yury's concerns.
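[Editor's note: Karl's point, that the covariance component weights are optimized directly against the sample data covariance rather than against a source estimate, can be illustrated with a toy covariance-component model. The sketch below is not SPM's ReML code; the diagonal components Q1, Q2, the sample size, and the multiplicative fixed-point update on the marginal likelihood are all minimal illustrative choices.]

```python
import numpy as np

rng = np.random.default_rng(2)
p = 8
# Two covariance components: variance 2.0 on the first 4 channels, 0.5 on the rest.
Q1 = np.diag([1., 1., 1., 1., 0., 0., 0., 0.])
Q2 = np.eye(p) - Q1
lam_true = np.array([2.0, 0.5])
C_true = lam_true[0] * Q1 + lam_true[1] * Q2

# Zero-mean Gaussian data; only its sample covariance enters the estimation.
N = 20000
Y = rng.multivariate_normal(np.zeros(p), C_true, size=N).T
S = (Y @ Y.T) / N                     # sample data covariance

# Fixed-point iteration on the hyperparameters lam of C = lam[0]*Q1 + lam[1]*Q2.
# At a stationary point of the marginal likelihood the trace ratio equals 1,
# so the multiplicative update leaves lam unchanged there.
lam = np.array([1.0, 1.0])
for _ in range(50):
    C = lam[0] * Q1 + lam[1] * Q2
    Ci = np.linalg.inv(C)
    for i, Q in enumerate((Q1, Q2)):
        lam[i] *= np.trace(Ci @ S @ Ci @ Q) / np.trace(Ci @ Q)
```

Note that the sources never appear: the hyperparameters are fit to the data covariance S alone, which is the sense in which the covariance estimation is not conditioned on a source estimate.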
>
>>> From: Yury Petrov <[log in to unmask]>
>>> Date: 20 September 2010 23:47:53 GMT+01:00
>>> To: Vladimir Litvak <[log in to unmask]>
>>> Subject: Re: iterative Bayes learning in MSP
>>>
>>> Hi Vladimir,
>>>
>>> On the contrary: as far as my concern goes, the present implementation is no different from what was described in the early papers, because the pitfall is of a very general nature. To put it simply, any Bayesian inference should be done in one step (given that all the available data are used at once). The only justification for repeating the inference is to obtain additional data, which is not the case with MSP or any of your earlier schemes based on your implementation of ReML.
>>>
>>> In this implementation the unknown source covariance and source mean are learned from the same data in repeated Bayesian learning steps. Every step after the first amounts to manufacturing fictitious data; it is a bit like pulling yourself out of a bog by your own bootstraps. The end result is that the solution will be unstable to noise in the data. Other source localization algorithms with the same pitfall are FOCUSS, CLARA, and sSLOFO.
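[Editor's note: Yury's "fictitious data" point has a compact conjugate-Gaussian illustration, a generic sketch not tied to any SPM code. Re-running a Bayes update on the same data, feeding each posterior back in as the prior, is algebraically identical to having observed that data several times over, so the posterior becomes spuriously overconfident.]

```python
import numpy as np

def bayes_update(m, v, data, s2):
    """One conjugate update of a N(m, v) prior on an unknown mean,
    given Gaussian data with known noise variance s2."""
    n = len(data)
    prec = 1.0 / v + n / s2
    m_new = (m / v + data.sum() / s2) / prec
    return m_new, 1.0 / prec

rng = np.random.default_rng(1)
s2 = 1.0
data = rng.normal(0.3, np.sqrt(s2), size=20)

# Correct single-step inference with prior N(0, 10):
m1, v1 = bayes_update(0.0, 10.0, data, s2)

# Pitfall: feed the SAME data back in k times, posterior -> prior each round.
k = 5
m_it, v_it = 0.0, 10.0
for _ in range(k):
    m_it, v_it = bayes_update(m_it, v_it, data, s2)

# This is exactly equivalent to pretending we observed k independent
# copies of the data, so the posterior variance is far too small.
m_dup, v_dup = bayes_update(0.0, 10.0, np.tile(data, k), s2)
```

The iterated and duplicated-data posteriors coincide, and both are tighter than the single correct update, which is the overconfidence Yury describes.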
>>>
>>> Best,
>>> Yury
>>>
>>> On Sep 19, 2010, at 3:35 PM, Vladimir Litvak wrote:
>>>
>>>> Hi Yury,
>>>>
>>>> I'm afraid my expertise does not go deep enough to answer this. The
>>>> only thing I can say is that I'm quite sure that the present
>>>> implementation in SPM8 is different from what was described in
>>>> Phillips et al., although the principles might be similar. I'm CCing
>>>> this to the list without the attachment (perhaps that was the reason
>>>> why you couldn't post it) and I hope one of the EM experts will
>>>> comment.
>>>>
>>>> Best,
>>>>
>>>> Vladimir
>>>>
>>>> On Tue, Sep 7, 2010 at 5:13 PM, Yury Petrov <[log in to unmask]> wrote:
>>>>> Vladimir, I emailed the following message to the SPM mailing list ([log in to unmask]) almost two weeks ago, and I don't believe it was posted. Do you have any comments?
>>>>>
>>>>> ----------------------------------
>>>>> Dear All,
>>>>>
>>>>> I have a conceptual concern regarding the MSP algorithm used by SPM8 to localize sources of EEG/MEG activity. The algorithm is based, in part, on an iterative EM scheme used to estimate the source priors (the source covariance matrix) from the measurements. As described in Phillips et al. (2002), the scheme works as an iterative Bayesian estimator: first it estimates the sources, then it calculates the resulting source covariance from that estimate, then it (effectively) uses this covariance as the new prior for the sources, estimates the sources again, and so on.
>>>>>
>>>>> However, applying Bayesian learning iteratively is a common pitfall and should not be used, because each such iteration amounts to introducing new fictitious data. I attached a nice introductory paper illustrating the pitfall on page 1426. In particular, the outcome of the iterations may become biased toward the original source covariance used. In my test application of the described EM algorithm I found that scaling the original source covariance matrix changed the resulting source estimate, which, in principle, should not happen. For comparison, this problem does not occur when the source covariance parameters are learned using ordinary or generalized cross-validation (OCV or GCV).
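[Editor's note: Yury's scaling test can be reproduced in a toy linear inverse problem. This is a sketch, not SPM code; the lead field L, noise level, and identity prior covariance are all made up. A MAP estimate with a fixed prior changes when the prior covariance is rescaled, whereas an estimate whose scale hyperparameter is chosen by GCV does not, because GCV re-absorbs the rescaling.]

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 30, 10
L = rng.standard_normal((m, p))       # toy lead field
x_true = rng.standard_normal(p)
sigma2 = 0.5
y = L @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)
C = np.eye(p)                         # assumed prior source covariance

def map_estimate(Cp, s):
    """MAP (ridge-type) source estimate with prior covariance s*Cp."""
    G = s * (L @ Cp @ L.T)
    return s * Cp @ L.T @ np.linalg.solve(G + sigma2 * np.eye(m), y)

def gcv_scale(Cp):
    """Pick the prior scale s by generalized cross-validation on a log grid."""
    best, best_s = np.inf, None
    for s in 10.0 ** np.linspace(-4, 4, 81):
        G = s * (L @ Cp @ L.T)
        H = G @ np.linalg.inv(G + sigma2 * np.eye(m))   # hat matrix
        r = y - H @ y
        score = m * (r @ r) / (m - np.trace(H)) ** 2
        if score < best:
            best, best_s = score, s
    return best_s

# Fixed-prior MAP: rescaling C moves the estimate (Yury's observation).
x1 = map_estimate(C, 1.0)
x2 = map_estimate(10 * C, 1.0)

# GCV-tuned scale: the rescaling is absorbed into s, estimate unchanged.
s1, s2 = gcv_scale(C), gcv_scale(10 * C)
xg1, xg2 = map_estimate(C, s1), map_estimate(10 * C, s2)
```

Here the GCV score depends on the product s*Cp only, so multiplying Cp by 10 simply shifts the selected s down by the same factor and the estimate is invariant, which is the behaviour Yury reports for OCV/GCV.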
>>>>>
>>>>> Best,
>>>>> Yury
>>>>>