Dear Felix and Sebastian,
>
>We have a simple event-related design, as follows:
>
>Task:
>Every 4th fMRI scan (TR = 3000 ms, scan duration 1650
>ms) the proband has to make a decision with respect
>to a presented stimulus.
>This was done 125 times.
>
>Control:
>The same as Task, but no decision was made.
So it sounds like your 'task' conditions and your 'control'
conditions were run in different blocks. Is it OK just to make a
couple of comments about your design?
There is possibly a bit of a problem with session effects, imperfect
spatial realignment between blocks, etc. You may already have
tried to minimize this by randomizing which block came first, task or
control, across subjects, but obviously this only helps you when
looking at the group data. It would be far preferable to interleave
task and control conditions. The strength of an event-related design is that
you can even randomize task and control events, but if this is not
possible (e.g. because it would all be too slow if you had to cue the
subject, for every task, as to whether a decision is required) then
it may be better to have an epoch-related design with epochs of
frequent decisions, epochs of frequent non-decisions, and probably
also 'rest' epochs. In this case, a cue at the start of each little
block would tell the subject what to do. Block designs are, after
all, more sensitive.
Is there a psychological reason why the events had to be so far apart
(12 seconds)? This greatly reduces the number of events in the whole
experiment, and from the point of view of measuring the differential
response between two event types it would be much more efficient to
pack in lots of events, with an SOA as short as 1-2 seconds. With
more than one type of event (and probably with 'null events' added),
this is fine. If you are staying with an event-related design you
could even pseudo-randomize, artificially increasing the frequency
with which runs of one or the other event type occur, to increase your
sensitivity.
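If it helps to see the efficiency argument in numbers, here is a
rough MATLAB sketch (nothing to do with SPM99 itself; the gamma-shaped
HRF, the one-second time bins and the event timings are all just
assumptions made up for illustration). It compares how efficiently a
differential (A minus B) contrast can be estimated with a fixed
12-second SOA versus a 2-second randomized design of the same length:

  % Toy comparison (not SPM code): efficiency of an A-minus-B contrast
  % for a slow fixed-SOA design versus a fast randomized design.
  t   = (0:30)';
  hrf = t.^5 .* exp(-t);  hrf = hrf / sum(hrf);  % crude gamma-shaped HRF

  T = 1500;                                      % total duration (s)

  % slow design: A and B alternating, one event every 12 s
  sA = zeros(T,1); sA(1:24:T) = 1;
  sB = zeros(T,1); sB(13:24:T) = 1;

  % fast design: one event every 2 s, type A or B chosen at random
  sA2 = zeros(T,1); sB2 = zeros(T,1);
  ons = 1:2:T; pick = rand(size(ons)) > 0.5;
  sA2(ons(pick)) = 1; sB2(ons(~pick)) = 1;

  % convolve the stick functions with the HRF, truncate to scan length
  rA  = conv(sA,  hrf); rA  = rA(1:T);
  rB  = conv(sB,  hrf); rB  = rB(1:T);
  rA2 = conv(sA2, hrf); rA2 = rA2(1:T);
  rB2 = conv(sB2, hrf); rB2 = rB2(1:T);

  Xslow = [rA  rB  ones(T,1)];
  Xfast = [rA2 rB2 ones(T,1)];
  c = [1; -1; 0];                                % differential contrast

  eff_slow = 1 / (c' * pinv(Xslow' * Xslow) * c);
  eff_fast = 1 / (c' * pinv(Xfast' * Xfast) * c);
  fprintf('efficiency, 12 s SOA: %g\n', eff_slow);
  fprintf('efficiency,  2 s SOA: %g\n', eff_fast);

The fast randomized design should come out considerably more efficient
for the differential contrast, which is all the sketch is meant to show.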
Incidentally, is there a reason why the scan itself only occupies
half of your TR? I am sure that the physicists told me that you
should have a scan length which occupies as much of your TR as
possible. Could the TR be made shorter?
>In a first analysis we want to look only at the
>task run, just asking which areas are activated
>(independently of the decision made).
Note that you are looking at the total activation generated by
everything that is time-locked to the stimulus (expectation, arousal,
attention, sensory effects of the stimulus, decision-related
activity, motor responses...). You would expect to see a lot of
areas active in such a comparison.
>We have done preprocessing:
>realign, slice timing, normalize, smoothing
I think that the usual recommendation is that you should do slice
timing before realigning, but I know that there's been quite a bit of
discussion about it on this list.
>and specified our fMRI model as:
>-> specify and estimate a model
>-> No. of sessions 1
>-> interscan interval 3
>-> trials 1
>-> SOA fixed, 4, 0
>-> parametric modulation none
>-> events
>-> hrf with time derivative
>-> filter gaus 4, hrf 32
>-> F contrast
>-> contrast 1 0 0
So presumably the first decision was made at the very time that the
first scan started? And did the slice timing procedure use the first
slice acquired as its reference?
The idea of using an F contrast would usually be to try to capture
all of the variance which can be explained by the hrf AND its
temporal derivative. This is how you would give yourself the most
latitude in terms of picking up responses of slightly different
latencies. If this is what you want to do then choose a contrast of 1 1 0.
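To see where that latitude comes from, here is a small MATLAB toy
(again nothing taken from SPM99 itself; the HRF shape and the
one-second latency shift are invented for the purpose). A response
that peaks slightly later than the canonical hrf is fitted much better
once its temporal derivative is added as a second regressor:

  % Toy illustration: a response lagging the hrf by 1 s is captured
  % far better when the temporal derivative is included in the model.
  t    = (0:0.5:30)';
  hrf  = t.^5 .* exp(-t);  hrf = hrf / max(hrf); % crude canonical shape
  dhrf = gradient(hrf, 0.5);                     % its temporal derivative

  y = [zeros(2,1); hrf(1:end-2)];                % same shape, 1 s later

  X1 = [hrf      ones(size(t))];                 % hrf only
  X2 = [hrf dhrf ones(size(t))];                 % hrf plus derivative

  r1 = y - X1 * (X1 \ y);                        % residuals of each fit
  r2 = y - X2 * (X2 \ y);
  fprintf('residual SS, hrf only:            %g\n', sum(r1.^2));
  fprintf('residual SS, hrf plus derivative: %g\n', sum(r2.^2));

With both columns in the model the shifted response is largely soaked
up, and an F test across the pair asks whether any combination of the
two explains significant variance.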
>When we examine the result, we get very large activated
>areas for an uncorrected p value of 0.001.
Very much as you would expect; see above. Everything related to the
task is showing up here. If you were using visual stimuli you would
expect to see, at the very least, lots of visual areas, superior
parietal cortex bilaterally extending down into the intraparietal
sulcus, bilateral prefrontal areas in and around the region of the
frontal eye fields, and so on.
>Now decreasing the p value at first shrinks the
>activated areas, but then, for a long range (1e-5 ...
>1e-15) the changes are very small, and then, by a step
>of 1e-17 (from 0.6e-16 to 0.5e-16) all activations
>vanish.
So far this is not all that surprising. The main point is that when
p values are very small, a large change in signal corresponds to
a very small change in p value (because the area under the curve is
so tiny at the extreme of the bell-shaped curve). So your fairly
uniform peak p values may be concealing quite a big difference in
signal between one cluster and the next.
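A quick illustration of how flat the extreme tail is, using nothing
more than the Gaussian tail area in plain MATLAB (the Z values are
arbitrary):

  % one-tailed Gaussian tail probability, P(Z > z) = 0.5*erfc(z/sqrt(2))
  Z = 3:8;
  p = 0.5 * erfc(Z / sqrt(2));
  for i = 1:length(Z)
      fprintf('Z = %d   p = %g\n', Z(i), p(i));
  end

Between Z = 3 and Z = 4 the p value drops by about 1e-3 in absolute
terms, but between Z = 7 and Z = 8 it only moves by about 1e-12, so
voxels with near-identical, very small p values can still differ quite
a lot in signal.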
Also, you are looking at smoothed data. Signal at the edge of a
cluster of active voxels will tend to be reduced by the smoothing
procedure (since those voxels are averaged with neighbouring voxels in
which there is no signal), so they will be nibbled away by your early
threshold increases. Then there is a group of voxels in the middle of
the cluster whose signals have all been averaged together by the
smoothing, and therefore the p values for these are very similar.
Thus within an
area you might expect all of the voxels to 'vanish' with a relatively
small threshold change.
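A one-dimensional caricature of this, with entirely invented numbers:

  % a 'cluster' of 20 active voxels smoothed with a Gaussian kernel
  signal = zeros(1,60);  signal(21:40) = 1;
  x      = -10:10;
  kernel = exp(-x.^2 / (2*2^2));  kernel = kernel / sum(kernel);
  smoothed = conv(signal, kernel, 'same');
  disp(smoothed(19:42))            % values around and inside the cluster

The voxels at the edges of the cluster end up well below the original
signal and drop out first as the threshold rises, while the voxels in
the middle all end up with nearly the same value, so they tend to
survive (or disappear) together.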
Even between clusters, if the noise is fairly similar, and the
maximum BOLD signal that can be generated in these areas is fairly
similar, then similar p values are not altogether surprising.
>This is particularly surprising because the last map
>where we get activations has a wide range of differently
>grayscaled (p-valued) voxels (and still quite large
>activated areas).
I wouldn't rely on 'grey-scale' to estimate the range of p values.
On the 'glass-brain' view areas where there are lots of voxels on top
of each other tend to look blacker, so I don't think that increasing
blackness equates with increasing significance.
>Is this a numeric problem? Might we have to change the numeric
>resolution in matlab?
I don't know. I would have to let someone more experienced answer
that question. But from what I have said already I don't think that
it has to be. But perhaps there is a ceiling on how small a p value
can be recorded within the program. I don't think that it really matters
anyway (see below).
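For what it is worth, the basic double-precision limits are easy to
check in MATLAB, although I don't know how SPM99 stores its p values
internally:

  eps       % relative precision of a double, about 2.2e-16
  realmin   % smallest normal positive double, about 2.2e-308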
>And: are the small p values realistic?
Probably. You are looking at the main effect of everything that is
related to the task. There could indeed be hugely significant
results. Trying to interpret them might be difficult, and clearly
you were aiming ultimately to compare task and control, which is much
more interesting biologically.
I am not sure why you are even interested in the exact p values when
they are so small. If an area shows an effect which is significant
at p < 0.01 corrected, then you have an effect which you
can report. An increase in the significance to p < 0.0001 corrected
might make you feel extra confident about your result. Beyond this
level the drop in p value doesn't really tell you anything extra. It
doesn't tell you how big your effect is, and you can't say one area
is 'more active' than another. It does, I guess, tell you that when
you look for differential effects (between task and a well-matched
control) then you have a reasonable chance of picking something up.
But the difference between one incredibly small p value and another
incredibly small p value doesn't seem very interesting.
>Thanks in advance for your help!
Ultimately I am afraid that I haven't answered your question, and
perhaps someone who knows SPM99 better than I do can do this. But I
hope that there is something useful for you in these comments.
Best wishes,
Richard.
--
from: Dr Richard Perry,
Clinical Lecturer, Wellcome Department of Cognitive Neurology, Darwin
Building, University College London, Gower Street, London WC1E 6BT.
Tel: 0207 679 2187; e-mail: [log in to unmask]