Dear Panagiotis,
On Wed, Mar 24, 2010 at 2:05 AM, Panagiotis Tsiatsis
<[log in to unmask]> wrote:
> Dear all,
>
> I am currently facing the following efficiency problem:
>
> I am running an MEG experiment (CTF, 275 sensors) of around 1000 trials
> which are split into 5 sessions, giving 200 trials per session. Each trial
> lasts 1 second. I downsample the data to 300Hz and then I perform a time
> frequency analysis in the range [1:100]Hz. Afterwards, I need to average the
> data, but the concatenated file (1000 trials) after the time frequency
> analysis has the modest size of 35GB and averaging takes the modest time of
> ~50 hours (that is per subject) on a system with Windows 7 and 8GB of ram.
> Of course the problem lies in the bottleneck in the computer memory as
> virtual memory is employed and most of the processing time is consumed in
> I/O (only one of the processors is used at around 20%).
>
> I know that it is a bit insane to go up to 100Hz and that you would probably
> suggest that I lower the upper frequency limit, use shorter trials,
> convert only every second / third channel in TF, to use a lower sampling
> rate (although with 100Hz upper limit, 300Hz sampling rate is pretty
> reasonable) and so on and so forth, but my problem is that I do not have a
> concrete a priori hypothesis and I would like to explore the full space for
> a few subjects before I narrow it down for the rest of the subjects.
>
Actually, the real answer to your problem is to downsample the time
axis after doing the time-frequency decomposition, or to use another TF
decomposition method that is more flexible with respect to time-window
selection. These options will be available in the next public SPM
release. They are already in the in-house version, so if you want I can
send it to you to beta-test.
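For illustration, here is a minimal sketch of what downsampling the time
axis of an already-computed TF array amounts to, in plain MATLAB rather
than through the SPM interface; the array sizes, variable names and the
factor of 10 are placeholders, not your actual data:

    % tf: channels x frequencies x time x trials array from a TF decomposition
    % (small placeholder dimensions; real data would be e.g. 275 x 100 x 300 x 1000)
    tf  = randn(10, 20, 300, 5);
    dec = 10;                                 % e.g. reduce 300 time points to 30
    [nc, nf, nt, ntr] = size(tf);
    nt  = nt - mod(nt, dec);                  % drop leftover samples, if any
    tf  = tf(:, :, 1:nt, :);
    tf_small = squeeze(mean(reshape(tf, nc, nf, dec, nt/dec, ntr), 3));
    % averaging within bins of 'dec' samples shrinks the time axis (and the file)
    % by a factor of dec; TF power envelopes are smooth, so little is lost

Simply keeping every dec-th sample (tf(:, :, 1:dec:end, :)) would also
work; binning just averages the samples instead of discarding them.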
> Since my paradigm involves 2 conditions that appear in a probability
> 80%/20%, I would ideally like to perform the analysis on the concatenated TF
> file; but this is overwhelming due to the reasons that I mentioned before.
> So, I wanted to ask the (possibly silly) question of whether I could apply
> the TF analysis and the averaging on each session file separately (around
> 7 GB) and then somehow re-average my conditions across the 5 sessions *in a
> weighted fashion* (possibly by using the contrast functionality?). Would it
> be more reasonable to skip this overall across-sessions and within-subject
> weighted averaging and implicitly consider the different sessions to be
> different "subjects" for my group level analysis?
>
>
What you can do is use 'Grand average', which has a 'weighted' option;
that is exactly what you need to average across sessions.
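In case it helps to see what the weighted averaging does numerically,
here is a minimal sketch in plain MATLAB (made-up trial counts and
placeholder arrays, not the SPM function itself): each session average
is weighted by the number of trials that went into it, which is
equivalent to averaging all trials pooled across sessions.

    % per-session TF averages for one condition and their trial counts (made up)
    n_sessions = 5;
    n   = [38 42 40 37 43];           % trials contributing to each session average
    avg = cell(1, n_sessions);
    for s = 1:n_sessions
        avg{s} = randn(10, 20, 30);   % placeholder channels x frequencies x time average
    end

    % weighted grand average: weight each session average by its trial count
    grand = zeros(size(avg{1}));
    for s = 1:n_sessions
        grand = grand + n(s) * avg{s};
    end
    grand = grand / sum(n);
    % this equals the average over all trials pooled across sessions, so averaging
    % each session first and then combining in this weighted way loses nothing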
> A further question, a bit off topic and possibly silly: if somebody is
> interested in induced responses in the time-frequency domain, should (s)he
> apply baseline correction before converting the time-domain data to
> time-frequency domain data? I am also interested in the prestimulus activity
> and I am afraid that in this case I might weaken the effect that I would
> expect to see when comparing across conditions. On the other hand, if I
> don't baseline correct it might be unfair to compare trials across the whole
> duration of the recording which lasts around 90 minutes (or am I mistaken
> here?).
>
Baseline correction in the time domain will only possibly affect your
lowermost frequency bin (if you start from DC). However, having large
DC offsets in the data might 'confuse' some TF methods and give wrong
results, so I'd suggest baseline-correcting. You can take the whole
trial as the baseline, so the pre-stimulus time will not be treated
differently.
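If it helps to make that concrete, here is a sketch of removing the
per-trial DC offset (the mean over the whole trial) in the time domain
before the TF step, in plain MATLAB with placeholder names and sizes:

    % data: channels x time x trials, still in the time domain (placeholder sizes)
    data = randn(10, 300, 5) + 50;                  % made-up data with a large DC offset
    data_bc = bsxfun(@minus, data, mean(data, 2));  % subtract each channel's whole-trial mean
    % using the whole trial as the 'baseline' removes the offset without treating
    % the pre-stimulus window differently from the rest of the epoch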
Best,
Vladimir
> Thank you very much for your time and support,
> Panagiotis
>