Thanks Krish. This reminds me of one thing I forgot to mention, namely
that smoothing can be used to deal with inter-subject variability
instead of moving analysis windows around. Smoothing can be done as
part of time-frequency analysis e.g. with multi-tapers (there will be
more options for that in the next SPM release) and also just applied
to TF images with the 'Smooth images' tool. In very extreme cases, such
as the one Krish mentioned (narrow-band signals with high inter-subject
variance), the smoothing might have to be so extensive that it blurs
details you want to keep, but in less extreme cases this is also
something to consider.
Vladimir
On Thu, Mar 18, 2010 at 10:28 AM, Krish Singh <[log in to unmask]> wrote:
> Thanks for an excellent answer, Vladimir.
> Just to pick up on one thing. I think there are instances where
> subject-specific frequency ranges are important and will enhance the SNR of
> the analysis. This is in situations where individual variability in
> oscillatory frequency components is known to be relatively large and the
> components themselves are quite narrow-band.
> At present the only one that springs to mind is visually induced gamma
> (which, of course, you would expect me to point out, see our recent work on
> this). Even then, this is probably limited to fairly narrow experimental
> situations where the stimulus is very simple and only contains one
> spatial-frequency.
> Even in these situations we have not quite settled on how to best analyse
> this using beamformers - it is quite a complex set of decisions. We
> typically will analyse each subject's data with relatively wide (e.g.
> 20-40 Hz) frequency ranges as we do not know a priori what the optimum
> frequency will be for that individual (and for weak gamma responses,
> task-induced effects may not always be visible in raw sensor-space data).
> Then, assuming we can see a subject-specific gamma response in visual
> cortex, we can of course re-analyse the data with narrower frequency ranges
> centred on that subject's specific frequency. However, that does not
> necessarily lead to a better source localisation with enhanced SNR - lots of
> factors come into play, including the fact that the narrower the
> filters, the less data is included in the beamformer reconstruction. This
> may then lead to poor or erroneous reconstruction (check out Matt's paper
> for a discussion of this: Brookes et al., Optimising experimental design for
> MEG beamformer imaging, NeuroImage (2008) 39(4), 1788-1802.)
> Krish
>
> On 18 Mar 2010, at 09:56, Vladimir Litvak wrote:
>
> Dear Feng,
> My answer to all your questions is 'it depends'. One thing you should
> remember is that anything you do has to be clearly explained in your
> paper, justified during the review and easy for other people to
> reproduce. Therefore, my approach is not to do any processing steps
> that are not essential for getting the results. What I usually do,
> after getting results that look sensible, is go back and check
> whether the optional processing steps are really necessary and, if
> not, remove them. I'd prefer to trade a non-critical degradation of
> the results (e.g. higher p-values) for a more straightforward
> processing stream.
> On Thu, Mar 18, 2010 at 9:18 AM, gao zai <[log in to unmask]> wrote:
>
> I was given the following answer:
> Use data without large artifacts such as blinks. Apply the beamformer to
> the segments after rejection of segments with large artifacts such as
> blinks, big saccades, large muscle noise etc., but before ICA correction.
> ICA and other regression methods for artifact correction might distort the
> topography and thus distort the localization.
>
> ICA is quite a heavy processing operation, especially for MEG, and it
> does basically the same thing as beamforming: it excludes signal
> components based on their spatial topography. So I don't see much advantage in
> combining the two. In any case, if you decide to do that you should be
> careful for two reasons. (1) As your colleague correctly noted, ICA
> distorts spatial topographies. This can be taken into account if you
> apply ICA via the 'Montage' function of SPM8 (presently only works for
> MEG data). What you should do is run ICA, let's say in EEGLAB, get a
> montage matrix that excludes some artefact components, save it as a
> montage that SPM can use and apply it with spm_eeg_montage (see the
> sketch after this paragraph). In that
> case the MEG sensor representation will also be updated and source
> localization will still work correctly. If you just apply the montage
> to the data but not to the sensors, your source localization might be
> invalid. (2) Removing ICA components renders the data rank-deficient.
> For SPM imaging source reconstruction this will probably not be a
> problem as it reduces the data dimensionality anyway. But for
> beamforming it may cause problems so you should probably use non-zero
> regularization in that case.
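> Just as an illustration, not a tested recipe, building such a montage from
> an EEGLAB decomposition could look roughly like this; the variable names,
> the 'bad' component list and the channel indexing are assumptions you would
> need to adapt, and the montage field names are as I recall them in SPM8:
>
>     % Sketch: turn an EEGLAB ICA decomposition into an SPM8 montage that
>     % removes the components listed in 'bad', then apply it so that the
>     % MEG sensor representation is updated as well.
>     A = EEG.icawinv;                        % mixing matrix (channels x components)
>     W = EEG.icaweights * EEG.icasphere;     % unmixing matrix (components x channels)
>     keep = setdiff(1:size(W, 1), bad);      % components to retain
>     megind = meegchannels(D, 'MEG');        % MEG channel indices in the SPM object D;
>                                             % must match the channel order given to ICA
>     montage = [];
>     montage.tra      = A(:, keep) * W(keep, :);   % channel-space projection removing 'bad'
>     montage.labelorg = chanlabels(D, megind);     % original channel labels
>     montage.labelnew = chanlabels(D, megind);     % same labels: data stay in channel space
>     S = [];
>     S.D          = D;
>     S.montage    = montage;
>     S.keepothers = 'yes';                   % leave channels outside the montage untouched
>     D = spm_eeg_montage(S);                 % also updates the MEG sensor information
>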
> With that said, I strongly advise you not to use ICA just because it's
> there, but only if you cannot get your results without it, which I
> really doubt.
>
> 1) When doing the beamforming, should I use the cleaned data or the data
> without artefact rejection? Recently I made a comparison between the two, and
> it seems to me that using the raw data without removing artefacts I can get
> better results, which is in contrast to what I thought before. We also had a
> discussion with a student here some time ago, and we thought that doing
> beamforming with raw data is acceptable as well.
>
> Beamforming is quite robust to certain kinds of artefacts but not to
> others. For instance, if you have SQUID jumps, you should exclude
> them. In general, the idea is that the data you use to compute the
> filters should contain the activity of the sources you are interested
> in and the things you want to suppress. Adding irrelevant noise will
> either be equivalent to increasing regularization (if the noise is IID
> across channels) or may degrade your results because the beamformer
> will have to use extra degrees of freedom to suppress it, and this will
> decrease its spatial selectivity. So if you have some trials with
> infrequently occurring artefacts that are not typical for the rest of
> the data, I'd exclude them. Otherwise I'd keep them, because they make
> it possible to estimate the artefact topography better. For instance,
> I'd keep eyeblinks even if they are not very frequent, because they
> might be helpful for suppressing eye-movement-related signals, which
> have a similar topography.
>
> 2) When doing the source localization, should we use the same time range and
> frequency range for all subjects, or choose a time window and frequency range
> depending on each subject's specific condition (around the peaks)? We have
> different views on this issue. Some say that using the first way (everything
> the same) is optimal for statistics, while others say we should use the second
> to get a better localization result, and that the statistics can also be done.
>
> This is again related to what I said at the beginning. I'd only start
> playing with time-frequency windows if you cannot get significant
> results with a fixed window, just because the latter is much easier to
> explain and reproduce. If you want to adjust them individually you
> should devise a clear and reproducible procedure for doing that and be
> able to convince your reviewers that indeed you are looking at the
> same thing in all subjects.
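> For example, a simple reproducible rule might be to pick each subject's peak
> frequency within one pre-specified search band and centre a fixed-width
> window on it. Purely an illustrative sketch; the band, the window width and
> the 'freqs'/'pow' variables below are assumptions, not a recommendation:
>
>     % freqs: frequency axis, pow: induced power spectrum for the sensor or
>     % source of interest in one subject (both assumed to exist already).
>     searchband = freqs >= 30 & freqs <= 80;   % same pre-specified band for everyone
>     bandfreqs  = freqs(searchband);
>     [~, imax]  = max(pow(searchband));
>     fpeak      = bandfreqs(imax);             % this subject's peak frequency
>     foi        = [fpeak - 5, fpeak + 5];      % fixed 10 Hz window centred on the peak
>     % apply exactly the same rule, with the same parameters, to every subject
>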
> Your questions are quite interesting for a wide audience, so I am CCing
> my answers to the SPM list and also to some beamforming experts who
> might have corrections or further insights.
> Best,
> Vladimir
>
> --
> Prof. Krish Singh
> CUBRIC
> School of Psychology / Ysgol Seicoleg
> Cardiff University / Prifysgol Caerdydd
> Park Place / Plas y Parc
> Cardiff / Caerdydd
> CF10 3AT, UK
> Tel / Ffôn: 02920 874690 / 870365
> Fax / Ffacs: 02920 870339
> Email / Ebost : [log in to unmask]