I realise there is a lot of confusion about image orientations in SPM, the
reasons for which are mostly historical.  The first image conversion routines
I wrote at the FIL, for getting data from the scanners into Analyze format,
wrote the data using the proper Analyze orientation - i.e., radiological
orientation or right-means-left.  This is a left-handed co-ordinate system.
Within Talairach space, the co-ordinate system is right-handed, so the
orientation of the images was flipped by default at the spatial
normalisation stage.  Once something like this has been introduced, it is
very difficult to undo.  If I were starting from scratch, I would simply do
a left-right flip of the template images and make a minor modification to
the spm_get_space.m routine, and things would be simplified greatly.

Positions in space can be represented in either a left- or right-handed
co-ordinate system.
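
For what it is worth, the handedness of a mapping can be read off the
voxel-to-world matrix directly.  A minimal sketch in plain MATLAB, using a
made-up affine rather than anything read from a real header:

    % Toy 4x4 voxel-to-world affine (invented numbers).  The sign of the
    % determinant of the upper-left 3x3 part gives the handedness.
    M = [2 0 0  -80
         0 2 0 -112
         0 0 2  -50
         0 0 0    1];
    if det(M(1:3,1:3)) < 0
        disp('left-handed (radiological, right-means-left)')
    else
        disp('right-handed (neurological)')
    end
    M_flipped = diag([-1 1 1 1])*M;   % a left-right flip negates x and
                                      % changes the sign of the determinant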

Date: Thu, 22 Jun 2000 09:02:56 -0400 (EDT)
Subject: Re: why does bounding box determine map significance?
From:  Petr Janata <[log in to unmask]>
To:  John Ashburner <[log in to unmask]>

On Wed, 21 Jun 2000, John Ashburner wrote:

> 
> | 1) This may be a really idiotic question, but how does one view the
> | uncorrected t-statistic images?  I'm assuming that viewing the t-statistic
> | images for a given contrast using the default values: "corrected height
> | threshold = no", "threshold {T or p value} = 0.001", and "extent threshold
> | {voxels} = 0" still applies a correction that is based on the
> | smoothness estimates and consequently the number of resels.
> 
> This displays the raw uncorrected t statistics that are more significant
> than p<0.001.  There is no correction for the number of resels when you
> don't specify a corrected height threshold.
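
To make the cut-off concrete: "uncorrected p<0.001" means a fixed height
threshold on the t image, computed from the error degrees of freedom alone.
A sketch with a hypothetical df (tinv is from the MATLAB Statistics Toolbox;
SPM's spm_invTcdf serves the same purpose):

    df = 20;                   % hypothetical error degrees of freedom
    u  = tinv(1 - 0.001, df);  % height threshold, approx. 3.55 for df = 20
    % every voxel with t > u is displayed, whatever the search volume

Note that u does not depend on the bounding box.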

Dear John,

Thanks for your response.  In your original response to my query about why
I am getting different uncorrected t-statistic images using different
bounding box sizes during the normalization step, you wrote:

"The uncorrected t statistic images should be the same. The differences
you see should relate to the effects of correcting for multiple dependent
comparisons.  With a bigger bounding box, you do more tests so each test
needs to pass a higher threshold before being considered significant. A
bigger bounding box may also give you a slightly lower grey matter
threshold, therefore including even more voxels."
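
The direction of that effect is easy to illustrate with a Bonferroni-style
calculation.  This is only a sketch of the principle (SPM's corrected
thresholds actually come from random field theory over resels, not from
Bonferroni), and the test counts here are invented:

    alpha = 0.05;
    n1 = 1000;  n2 = 2000;       % hypothetical numbers of independent tests
    u1 = norminv(1 - alpha/n1);  % approx. 3.89
    u2 = norminv(1 - alpha/n2);  % approx. 4.06: more tests, higher threshold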

Having now established (as per the interchange quoted above) that we are
talking about the same thing when we talk about displaying uncorrected
t-statistic images via the SPM99 interface, the point remains that the
uncorrected t-statistic images differ as a function of the bounding box that
is specified during normalization.

The difference in the number of significant voxels in the t-statistic
images plotted as described above appears to be directly related to the
number of resels, and the number of resels is a function of the size of
the bounding box.  When we generated the normalized images, the only
parameter we changed was the bounding box size.  We created the two sets
of normalized images and performed the analyses one after another, so I
don't believe we could have run into the problem of writing two
different sets of normalized images at the same time, which you suggested
below might be the problem.  Forgetting for the moment about trying to
normalize into a bounding box that was the same size as our original data,
we used two of the SPM99 bounding box options: the SPM99 default, and
option #4.  Both give differing estimates of the smoothness, and
consequently the number of resels and significant voxels:

Smaller bounding box (SPM99 default)
3948 significant voxels
VOL =
       S: 21242
       R: [2 43.5423 508.3141 1.2528e+03]
    FWHM: [2.6842 2.5119 2.1405]

Bounding box option 4
2545 significant voxels
VOL = 
       S: 21788
       R: [2 37.2294 383.8368 832.4568]
    FWHM: [3.1065 2.9114 2.4696]
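
For what it is worth, the two listings above are roughly consistent with
the first-order relation resels = voxels / prod(FWHM), with FWHM expressed
in voxels.  (SPM computes the resel counts with spm_resels_vol, which
accounts for the shape of the search region, so exact agreement is not
expected.)

    R3_default = 21242 / prod([2.6842 2.5119 2.1405])  % ~1472 (reported 1252.8)
    R3_option4 = 21788 / prod([3.1065 2.9114 2.4696])  % ~976  (reported 832.5)
    R3_default / R3_option4                            % ~1.51 vs 1252.8/832.5 ~ 1.50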

Is the smoothness estimate from which the number of resels is
derived computed over the entire image or only those parts of
the image that exceed some threshold?

Thanks.
Petr


> | 2) The size of the bounding box strongly influences the smoothness
> | estimates, which I assume are then used to generate the significance maps.
> | 
> | Normalization to the SPM99 Default bounding box results in the following
> | values:
> | 
> | VOL.
> |        S: 21242
> |        R: [2 43.5423 508.3141 1.2528e+03]
> |     FWHM: [2.6842 2.5119 2.1405]
> | 
> | whereas normalization to a bounding box which is the same size as the
> | original volume results in
> | 
> | VOL.
> |        S: 23350
> |        R: [1 22.7378 110.1009 125.8692]
> |     FWHM: [5.9930 5.6235 4.7065]
> | 
> | In both cases, the voxel size is 3.75 x 3.75 x 5 mm.
> | 
> | The larger bounding box gives a smoothness estimate that is twice as large
> | as the smaller bounding box's, and an order of magnitude fewer resels.  I
> | assume that this is why the t statistic images differ so much.
> | 
> | It is somewhat puzzling that the smoothness estimate would depend so
> | strongly on the bounding box, unless the smoothness estimate is derived
> | from a particular spatial frequency bin, which would correspond to a lower
> | spatial frequency given a larger bounding box.
> 
> I have no idea why the smoothness estimates are so different.  Were there any
> other differences in the analyses of the data?  Less good model fits often
> produce more residual smoothness.  Is there anything bizarre in the normalised
> data?  For example, if you have two jobs writing normalised images at the
> same time, then you can have problems.
> 
> Regards,
> -John
> 
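
As a quick check on the "order of magnitude" point in the quote above,
using the same first-order approximation as before:

    prod([5.9930 5.6235 4.7065]) / prod([2.6842 2.5119 2.1405])  % ~11.0
    1.2528e+03 / 125.8692   % ratio of reported 3D resel counts, ~10.0

So a roughly 2.2-fold coarser FWHM in each of three dimensions cuts the
resel count by about 2.2^3, which is the order of magnitude observed.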

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%