Dear Uta,

I'm CCing this to the list as I think my answers are not the only  
possible ones and maybe other people will have other ideas.


On 25 Nov 2009, at 15:51, Uta Noppeney wrote:

> Basically, in this experiment we look at steady state responses to  
> visual, auditory and audiovisual stimuli.
> V = luminance modulation at 8.5714 Hz
> A = amplitude modulated sinewave at 19 Hz
> AV = A+V combined
> trial duration in each condition = 120 s

That's my first general remark: SPM was not designed to handle such
long trials. For instance, it usually assumes that a single trial fits
easily in memory, which is probably why you are getting the memory
errors. What I don't see, however, is how such long trials are
advantageous for you. Your data are basically steady state, so why not
epoch them into shorter trials, let's say 1 s long, and then work with
those? That will save you a whole lot of problems.
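
For example, assuming you convert the recording as one continuous
file, something along these lines would cut it into 1 s segments. This
is only a rough sketch: the file name is made up, and the exact
spm_eeg_epochs fields differ a bit between SPM versions, so check the
function help for yours.

    % Cut a continuous file into consecutive 1 s segments by building a
    % trial definition matrix by hand (SPM12-style spm_eeg_epochs fields).
    D   = spm_eeg_load('your_continuous_file.mat');  % hypothetical file name
    fs  = D.fsample;                                 % sampling rate in Hz
    len = round(1 * fs);                             % segment length in samples
    starts = (1:len:(D.nsamples - len + 1))';        % segment onsets in samples
    trl    = [starts, starts + len - 1, zeros(size(starts))];  % [begin end offset]

    S                 = [];
    S.D               = D;
    S.trl             = trl;
    S.conditionlabels = 'AV';   % or whatever the condition of this file is
    S.bc              = 0;      % no baseline correction for steady state
    Dshort = spm_eeg_epochs(S);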

>
> The basic idea is to look at comodulation terms in the AV
> conditions at n*8.5714 Hz + m*19 Hz, ideally in source space
>
> Now, we've run into the following questions and troubles:
> 1. general 'out of memory' errors - I guess our IT people need to  
> look after that?

See above.

> 2. How should we do artefact rejection? Given that we've got such
> long trials, we don't want to remove entire trials because of eye
> blinks - so I was wondering about using ICA from EEGLAB and then
> reconstructing the data after having removed the components that
> correlate with the EOG channels - What would you suggest?

There are many possible answers here. The simplest one is: if you
follow my advice and divide your data into short trials, you can
easily reject some of them and you won't lose much of your data. I
wouldn't get into ICA. I don't remember what MEG system you have in
Tuebingen. The biggest problem with ICA for MEG is that, due to head
movements, MEG violates the basic ICA assumption of the mixing matrix
being fixed across time. People who work with Neuromag, particularly
Jason Taylor from Rik's group, say they can compensate for that if
they apply Neuromag's MaxFilter first, but for other MEG systems there
is no such solution. I've never applied ICA to MEG myself, but I
wasted years of my life trying to apply it to EEG, and in my
experience it is a method that runs for ages, requires a lot of
subjective judgement calls and never does what you expect it to do.
The only reasonable way to use ICA is to run it without any
expectations and then tell stories about whatever results you get. But
if you actually have an aim or a hypothesis you should use something
else.

Many people use ICA just for artefact correction, but usually when it
works well for that there are also much simpler and more efficient
methods that work just as well. One of them is implemented in SPM (in
MEEGTools) and has already been used successfully by Debbie Talmi and
Laurence Hunt. I can give you some details, but I don't think you
really need it in your case. You should just reject the blink segments
(see the sketch below for one simple way to flag them).

A third answer is that if you use something like a beamformer (or even
MSP), perhaps you shouldn't worry about it too much, as long as you
are not expecting sources in the orbitofrontal cortex. If you get some
activations around the eyes in your source reconstruction you can just
say these are probably eyeblinks and ignore them.
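
Here is that sketch: flagging 1 s segments by a peak-to-peak
threshold on the EOG channel. The file name, channel index and
threshold are made up and need adjusting for your recording, and the
method for marking bad trials is called badtrials in recent SPM
versions and reject in older ones.

    % Flag segments containing large EOG deflections (likely blinks).
    D       = spm_eeg_load('your_epoched_file.mat');  % hypothetical file name
    eogchan = 64;     % index of your EOG channel - adjust for your montage
    thresh  = 200;    % peak-to-peak threshold, in the units of that channel

    bad = false(1, D.ntrials);
    for t = 1:D.ntrials
        eog    = squeeze(D(eogchan, :, t));       % EOG trace for this segment
        bad(t) = (max(eog) - min(eog)) > thresh;  % flag large deflections
    end
    D = badtrials(D, find(bad), 1);   % reject(D, find(bad), 1) in older SPM
    save(D);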

> 3. Would you apply any filtering - I thought rather no, since we are  
> anyway focusing on frequency space and we don't want to introduce  
> any filter artefacts?

Since you are expecting your effects in some very specific bands, I'd
bandpass filter the data for each of those bands separately prior to
doing source reconstruction, as that will focus your analysis
specifically on those bands. Otherwise your source reconstruction
might become dominated by things you are not interested in, if they
are more prominent features of the data. If you are worried about edge
artefacts, you can filter your continuous data and throw out the first
and the last trial.
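
As a rough sketch of the per-band filtering step: the band edges below
are just examples around your stimulation frequencies and their sum,
and the spm_eeg_filter field names depend on the SPM version (older
versions use an S.filter substructure instead).

    % Filter the continuous file separately for each band of interest.
    D     = spm_eeg_load('your_continuous_file.mat');  % hypothetical file name
    bands = [7 10; 17 21; 26 29];   % e.g. around 8.57 Hz, 19 Hz and 8.57+19 Hz

    for b = 1:size(bands, 1)
        S      = [];
        S.D    = D;
        S.band = 'bandpass';
        S.freq = bands(b, :);
        Df     = spm_eeg_filter(S);  % writes a new filtered dataset to disk
    end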

> 4. We'd like to do the FFT analysis in source space - shall we  
> localize the 120 s trials, each trial separately? Would that work  
> for MSP or would SPM run out of memory??

My suggestion is:

1) Convert the continuous data.
2) Filter for each band separately (I'd make it slightly wider than  
the exact frequency of course).
3) Epoch into short trials and reject the bad ones. You can epoch the
unfiltered data, detect bad trials there, and then mark the same
trials as bad in your filtered datasets (see the sketch after this
list). What will be left is a file with many short trials coming from
the different long trials of your original experiment. That's what you
source reconstruct. You can try either MSP or beamformers (LCMV or
DICS).
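
For step 3, here is a rough sketch of copying the bad-trial flags to
an epoched copy of one of the filtered files. It assumes both files
have the trials in the same order; the file names are made up and,
again, badtrials is called reject in older SPM versions.

    Dref  = spm_eeg_load('epoched_unfiltered.mat');  % where artefacts were marked
    Dband = spm_eeg_load('epoched_band19Hz.mat');    % filtered copy to update

    bad   = badtrials(Dref);            % indices of trials already marked bad
    Dband = badtrials(Dband, bad, 1);   % mark the same trials as bad here
    save(Dband);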


> 5. How would we do a FFT in source space? We could find only the  
> wavelet transform for time frequency analysis - but since we are  
> dealing with stationary signals, I think we should just go for a  
> simple FFT - can we do this in SPM? Or would you just go back to  
> simple matlab functions ...
>

If you bandpass filter prior to source reconstruction there is no need
to do an FFT afterwards. Just make images and take them to the 2nd
level. The other way around would be much more complicated and
cumbersome, although it can also be done. There is a function in
Fieldtrip for spectral analysis of stationary signals (ft_freqanalysis
with the 'mtmfft' option).
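
For completeness, a sketch of that Fieldtrip call, assuming 'data' is
a Fieldtrip raw data structure (you can get one from an SPM file with
spm2fieldtrip or the ftraw method, depending on the SPM version):

    cfg        = [];
    cfg.method = 'mtmfft';   % FFT-based spectrum for stationary signals
    cfg.output = 'pow';      % power spectra
    cfg.taper  = 'hanning';  % single Hanning taper; 'dpss' for multitapers
    cfg.foilim = [1 40];     % frequency range of interest in Hz
    freq = ft_freqanalysis(cfg, data);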

Best,

Vladimir