Hi Joe, Federico, Cathy,

I've been thinking and talking about this over the last week, and it seems more
and more clear to me that conjunctions are not the correct test for deciding
which areas are activated (for example) in common by task A AND task B AND task
C.  I hope you don't mind if I work through my reasoning.

Forgive me Joe if I quote your reply a little, as I think it brings up some
crucial points.

--- a clipped bit of Joe's reply -------

In practice, however, [the problem of negative t value thresholds] is
not too likely to come up because most studies use small n's and
higher significance thresholds (p<0.001 uncorrected, at least).  In my
opinion, I'd only report the results of such an analysis as consistent
activation across subjects if the minimum t value was positive -- ie,
I'd be more conservative than the statistical test.

[Where conjunctions] are useful is comparing several different tasks
which presumably share one or more cognitive component of interest, as
they were originally described.

-----------------------------------

Thanks a lot for this reply, which prompted me to try and clarify my thoughts.
This was helped by an email from Richard Perry, who pointed out the very
pertinent discussion on the list, from January this year.

In summary, it seems to me that it is in fact exactly the use of conjunctions
to look for similarity across tasks that is the problem, rather than the
comparison across subjects.  This was pretty clearly pointed out by the
original email in the thread in January, by Pierre Fonlupt:

http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0012&L=spm&P=R9188

So, the problem is with the idea of a conjunction as a logical AND.
This is the sense in which it is usually used when comparing across
subtractions in the same subject.  That is to say, if I look at
subtractions A, B, and C, and the conjunction of the three is
significant, the conclusion is usually that all three subtractions are
causing increases in signal (activation).  The null hypothesis for the
AND analysis is that there is no activation in AT LEAST ONE of the
comparisons.  The problem is that, as Karl and others have noted, the
null hypothesis for the minimum t statistic (which is used for
conjunctions in SPM99) is that there is no activation in ANY of the
comparisons:

http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0101&L=spm&P=R30201&m=3596

If you use the conjunction as a test of AND, and in fact only one of
the subtractions causes activation, then the conjunction of the three
will give you a (large) false positive rate for the test of AND.  This
is true even if all three of the subtractions have a positive t
statistic.
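
To see how large the false positive rate can get, here is a toy
single-voxel calculation of my own (it is NOT spm_P: it ignores the
random field correction over voxels, and assumes the subtractions are
independent, so the numbers are only illustrative):

```python
# If each null t statistic exceeds a threshold u with probability
# p_each, the minimum of n independent such statistics exceeds u with
# probability p_each ** n.

alpha = 0.05   # desired false positive rate for the conjunction
n = 3          # number of subtractions

# Calibrate so the conjunction null (no activation in ANY subtraction)
# is rejected exactly 5% of the time.
p_each = alpha ** (1.0 / n)              # about 0.37 per subtraction

# If subtraction A truly activates, its t statistic (almost) always
# clears the threshold, so only B and C need to exceed it by chance.
false_positive_for_AND = p_each ** (n - 1)

print(round(p_each, 3))                  # 0.368
print(round(false_positive_for_AND, 3))  # 0.136 - far above 0.05
```

So a threshold calibrated for the "no activation anywhere" null lets
the AND claim through far more than 5% of the time when one
subtraction really does activate.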

The reason for this is that the t statistics do not represent the
ACTUAL activation, but the ESTIMATION of the activation, with
noise. Because of the noise, there is a chance that all three of the t
statistics for A,B,C are positive, even when, in reality, there is no
activation. If one of the subtractions activates, this chance
increases, because it is more likely that two subtractions will be
positive by chance than that three subtractions will be positive by
chance.  I've put a worked example at the end of the email which
explains this more clearly.
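
The "two chance events versus three" point can be checked with a quick
simulation (again my own sketch, with made-up numbers: each t
statistic is approximated by a standard normal, which is close to a t
with many degrees of freedom, and the subtractions are assumed
independent):

```python
# Chance that all three t statistics come out positive at one voxel,
# first when no subtraction really activates, then when subtraction A
# has a large genuine effect.
import random

random.seed(1)
n_sims = 100_000

all_pos_null = 0
all_pos_one_active = 0
for _ in range(n_sims):
    a, b, c = (random.gauss(0, 1) for _ in range(3))
    # Global null: each statistic is positive half the time, so all
    # three are positive about 1/8 = 12.5% of the time.
    if a > 0 and b > 0 and c > 0:
        all_pos_null += 1
    # Subtraction A truly activates (shift its statistic by +5), so
    # "all positive" now needs only two chance events: about 25%.
    if a + 5 > 0 and b > 0 and c > 0:
        all_pos_one_active += 1

print(all_pos_null / n_sims)        # close to 0.125
print(all_pos_one_active / n_sims)  # close to 0.25
```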

So, I don't think the probability for a conjunction can sensibly be
used as a test of AND, even if all of the subtractions have a positive
t statistic. Indeed, it seems to me there is a reasonable consensus on
this:

Pierre Fonlupt:
http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0101&L=spm&P=R7310
Richard Perry:
http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0101&L=spm&P=R3106
Jesper Andersson:
http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0101&L=spm&D=0&P=7501

Sorry about the long email...

See you,

Matthew

Worked example
--------------

Let's imagine that there is no ACTUAL activation in any of the three
subtractions mentioned above, A, B, and C.  Let's take an example set
of resel estimates for our brain volume, for the multiple comparison
correction (in fact from a PET analysis I had to hand): R = [1.0000
26.7179 180.5031 325.0859]; and the degrees of freedom for the t
statistics for the subtractions: df = [1 67].  Of course the number of
subtractions is 3: nsubs = 3.  The minimum t statistic that gives a
corrected p value of 0.05 (for height) is 2.32
(spm_P(1,0,2.32,df,'T',R,nsubs) = 0.05).  That is to say, taking into
account the multiple comparisons, there is a 5% chance that any voxel
will have a minimum t statistic, across the three comparisons, as high
as 2.32.  So, if you have a minimum t statistic of, say, 2, then you
cannot conclude that there is any ACTUAL activation, because this
could fairly easily (29% of the time) have come about by chance.

Now, let's imagine that subtraction A activates, and activates to a
sufficient degree that it makes a negligible contribution to the
minimum t statistic.  Thus the effective number of subtractions
contributing to the minimum t statistic is 2: nsubs = 2.  Now, your
chance of finding a minimum t statistic of 2.32, in one or more
voxels, by chance, even if there is no ACTUAL activation in
subtractions B or C, is given by: spm_P(1,0,2.32,df,'T',R,nsubs) =
0.76 - i.e. 76%.

So, if you took a minimum t of 2.32 as evidence of ACTUAL activation
of A AND B AND C, then you will find a voxel meeting that criterion
somewhere in the brain in 76% of your analyses, by chance, even if
subtraction A is the only one to activate.

Of course that isn't quite true, as the increase to 76% would require
subtraction A to be actually activating in all voxels, but a) the
logic remains, even if A activates a smaller area, and b) if B or C
activate a reasonable (but not overlapping) area, this also adds to
the problem.