> NOBODY HAS ANY IDEAS?
Plenty of ideas - just no time to implement them.
> in a VBM analysis I compared 60 psychiatric patients and 70 healthy
> controls, looking for GM volume differences in SPM5/Matlab 7.1 on Windows.
>
> To my slight surprise, GM volume differences appear to be entirely outside
> the brain. In the attached image you see areas of GM volume reduction in
> patients overlaid on an unsmoothed normalised GM segment mwc1*.img. If I
> overlay the results on the SPM5/canonical/single_subj_T1.nii, it's the
> same: GM volume reductions do not at all overlap with the GM. If I overlay
> the results on the mask.img, results are all inside the mask, right at its
> edge.
It's no real surprise to me. The t statistics of VBM are not very precise at
localising differences. A t statistic is proportional to the difference (the
con images) divided by the square root of the residual variance (ResSS).
Because there is essentially no residual variance in regions with no signal
(i.e. a long way outside the brain), but lots of residual variance within the
brain, the maximum of the t statistic will tend to drift towards low-variance
regions.
Fred Bookstein pointed this one out in his article about VBM. I also made the
point in this email....
http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind04&L=SPM&P=R307792&I=-3
...but nobody noticed it (apart from Tom - but he didn't take it any further).
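The effect is easy to demonstrate numerically. Here is a toy sketch (plain
Python/NumPy, not SPM code; all the numbers are made up for illustration) in
which two "voxels" carry an identical group difference, but the one with
almost no residual variance gets a far larger t value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # subjects per group (toy value, not the poster's data)

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    sp = np.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb)
                 / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / (sp * np.sqrt(1.0 / len(a) + 1.0 / len(b)))

diff = 0.02  # the same group difference at both "voxels"

# "Inside the brain": plenty of residual variance.
t_brain = two_sample_t(rng.normal(0.5 + diff, 0.2, n),
                       rng.normal(0.5, 0.2, n))
# "Near the edge of the mask": almost no signal, so almost no residual variance.
t_edge = two_sample_t(rng.normal(0.02 + diff, 0.01, n),
                      rng.normal(0.02, 0.01, n))

print(t_brain, t_edge)  # the low-variance voxel gets by far the larger t
```

Dividing the same small difference by a tiny residual standard deviation is
exactly what pushes the t maximum towards the edge of the mask.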
>
> For my analyses I used the smwc1*.img (smoothed with 12x12x12 mm) and the
> following design: PET / two-sample t-test
> Independence: yes
> Variance: unequal
> Grand mean scaling: No
> Ancova: No
> Absolute threshold: 0.05
> Implicit mask: yes
> Explicit mask: no
> Global values: user-defined GM-volume integrals (from the smwc1*.img)
> Overall grand mean scaling: No
> Normalisation: proportional scaling
>
>
> I am aware that VBM-GM results may appear to extend beyond the GM; see John's
> reply to
> a similar problem in VBM/SPM2b, 7 May 2003:
> http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind03&L=SPM&P=R166060&I=-3 "*
> There are likely to be some non-brain regions misclassified as GM, which
> could appear outside the glass brain.
> * The spatial normalisation is not 100% exact (cortical surface is
> usually registered within about 1cm).
> * Smoothing will spread signal outside the glass brain."
>
> However, I am puzzled that these GM-results seem to be ENTIRELY outside the
> brain (and only towards the CSF, not towards WM), and do not even overlap a
> bit with the GM in the template or the normalised GM segments. It just
> looks more like a CSF result ;)
>
> I tried to check for obvious errors. But the segmentation seems to have
> gone well for all images (that means, the first reason given by John is
> unlikely in this case if I am not mistaken). Looking with Checkreg at the
> smwc1*.img and mwc1*.img, they seem to be all in the same space,
> orientation, voxel-size etc.
>
> I would be very grateful for your opinion:
> Is this a true result (given John's explanations 2 and 3 above)?
> Or has something gone wrong? If so, any ideas where, when and how ??
Provided you have a balanced design (which you appear to have), you should
obtain the appropriate rate of false positives. If there are no significant
differences among the pre-processed images, then you should only see a
corrected p value of less than 0.05 in about one analysis out of 20.
Therefore the chances are that you have real differences among your data.
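That false-positive logic can be checked with a small Monte Carlo sketch
(plain Python/NumPy, not SPM; for simplicity both groups have 60 subjects,
and the critical t value is hard-coded for df = 118). Under the null, a
two-sample t-test thresholded at p < 0.05 fires in roughly one run in twenty:

```python
import numpy as np

rng = np.random.default_rng(1)
n_experiments = 4000
n = 60            # subjects per group (balanced design)
t_crit = 1.9803   # two-sided 5% critical value for df = 2*n - 2 = 118

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal group sizes)."""
    sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2.0
    return (a.mean() - b.mean()) / np.sqrt(sp2 * 2.0 / n)

# Both groups are drawn from the same population, so every "significant"
# result is a false positive.
hits = sum(abs(two_sample_t(rng.normal(0, 1, n), rng.normal(0, 1, n))) > t_crit
           for _ in range(n_experiments))
rate = hits / n_experiments
print(rate)  # close to 0.05
```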
The difficult part is to actually figure out what has caused the differences.
This is easier if the anatomical differences are focal. If the differences
are of a more global nature, then a mass-univariate procedure such as
statistical parametric mapping will not model them especially well and
results would be difficult to interpret. Global shape differences would best
be modelled with a multivariate procedure. Unfortunately, the results of
such multivariate procedures cannot be easily communicated.
If you take a look at the contrast image, then you'll see how multivariate the
differences are. For fMRI data, the contrast images are usually pretty
uniform except for a few discrete regions. Often, VBM data produce contrast
images that encode a whole pattern of difference. Only some of this pattern
is reported in the form of a few blobs. These are the bits that survive
after dividing the pattern by the residual standard deviation and
thresholding.
Within science, we use a model until a better one comes along. Multivariate
approaches have been around for a while, but haven't really caught on because
they don't produce a nice table of blobs. The mass-univariate approach
currently dominates because researchers can at least attempt to localise
differences within this framework and these localised differences are easily
communicable (if not necessarily interpretable). A truly multivariate
approach does not attempt such localisation. It just models the differences.
Sometimes these models of difference can be visualised (linear methods), but
not in the form of discrete blobs. Sometimes they cannot easily be
visualised (nonlinear methods).
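For the linear case, the kind of "model of difference" meant here can be
sketched in a few lines (toy data, plain Python/NumPy; Fisher's linear
discriminant stands in for whatever multivariate method one might actually
use). The weight vector w is a single distributed pattern separating the
groups - it could be displayed as an image, but it is not a table of blobs:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_voxels = 60, 50  # toy sizes, made up for illustration

# Two groups differing by a spatially distributed pattern, not a focal blob.
pattern = np.sin(np.linspace(0, 3 * np.pi, n_voxels))
patients = rng.normal(0, 1, (n, n_voxels)) + 0.5 * pattern
controls = rng.normal(0, 1, (n, n_voxels))

# Fisher discriminant: w = (pooled covariance)^-1 (mean difference).
mu_diff = patients.mean(0) - controls.mean(0)
centred = np.vstack([patients - patients.mean(0),
                     controls - controls.mean(0)])
cov = centred.T @ centred / (2 * n - 2) + 1e-3 * np.eye(n_voxels)  # ridged
w = np.linalg.solve(cov, mu_diff)  # the "pattern of difference" image

# Projecting every subject onto w separates the groups along a single axis.
p_proj = patients @ w
c_proj = controls @ w
print(p_proj.mean() - c_proj.mean())  # clearly positive separation
```

The whole model of difference lives in w and the projection; there is no
voxel-wise thresholding step to report.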
My views may be biased (because I normally only hear about things that don't
work so well), but I am tending towards favouring multivariate approaches
over the mass-univariate ones. They have the potential to characterise any
differences much better. The problem is that they are a bit of a black box,
so understanding the mechanism of how such a model separates brains of
one group from those of another is not so straightforward.
Best regards,
-John