Hi Will
Sorry to be a slow replier on this.
See my responses below. Basically, it seems to me that the definition of
within- and outside-brain voxels in the first step of determining the
grand mean scaling may be faulty, as also noticed by Tom Nichols. This is a
problem because it is assumed that everyone's baseline is roughly
100, or else some people can count twice as much as others when grouping
subjects in a random effects analysis (if one subject had a baseline of 100 and increased by 2 during a
task of interest, then it would count exactly the same as someone who had
a baseline of 300 and increased by 2 during the task of interest).
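A toy calculation (hypothetical numbers, just to make the concern concrete):

```python
# Toy numbers (hypothetical) for two subjects with equal raw task effects
# but different raw baselines.
baseline = [100.0, 300.0]          # per-subject baseline intensity
effect   = [2.0, 2.0]              # per-subject raw task-related increase

# Unscaled, both effects enter a random-effects average as "2", even though
# subject 2's response is only a third of subject 1's relative to baseline.
scale  = [100.0 / b for b in baseline]           # grand-mean-scaling factors
scaled = [e * s for e, s in zip(effect, scale)]  # effects after scaling
print(scaled)  # [2.0, 0.666...]: now commensurate as percent signal change
```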
My responses below are in capital lettering.
Thanks
jeff
On Tue, 6 Jan 2004, Will Penny wrote:
> Jeffrey P Lorberbaum wrote:
>
> > Dear Will and George:
> >
> > In my search through my image volumes, which were spatially normalized to
> > the epi.img template (lots of out-of-brain voxels), almost all subjects
> > had a beta0 image volume whose mean over within-brain voxels was above 100
> > (actually 130 in some people and 300 in others).
>
>
> Hi Jeff,
>
> I've talked about this issue further with colleagues and the reasons for your observations are
> as follows:
>
> 1. spm_global estimates the 'global' activation in scan i using the following
> method. It computes the mean in the image, call it m. It then computes
> the mean of all voxels above m/8. Call this g. Basically the first step
> just removes outliers or non-brain voxels.
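A minimal sketch of this step (hypothetical NumPy code, not the actual spm_global source):

```python
import numpy as np

def spm_global_sketch(volume):
    """Sketch of SPM's global estimate: compute the whole-image mean m,
    then return the mean g of all voxels with value above m/8."""
    v = np.asarray(volume, dtype=float).ravel()
    m = v.mean()                 # first pass: mean over every voxel
    g = v[v > m / 8.0].mean()    # second pass: mean over "brain" voxels only
    return g
```

With many near-zero out-of-brain voxels, m is dragged well below the within-brain mean, so the m/8 cutoff may still admit non-brain voxels, which is the concern raised above.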
>
MY UNDERSTANDING (AS WELL AS HAS BEEN REPORTED ON TOM NICHOLS
WEBSITE) IS THAT THE M/8 CUTOFF WAS BASED ON TIGHTLY CROPPED TEMPLATES AND
THIS CUTOFF DOES NOT WORK NECESSARILY WELL FOR IMAGE VOLUMES
NORMALIZED TO THE NOT
TIGHTLY CROPPED EPI.MNC TEMPLATE.
THUS, THE BRAIN VERSUS NON-BRAIN CUTOFF COULD BE OFF.
> 2. spm_fmri_spm_ui gets g(i), the value of g for the ith scan (line 382)
>
> 3. If we're doing Grand Mean scaling rather than proportional scaling
> we compute the grand mean for that session as simply the mean over g(i). The
> scaling factor for that session, gSF, is then set to 100/mean(g(i)) (line 394).
> This scaling factor is then applied to the images.
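In sketch form (hypothetical NumPy code, not the actual spm_fmri_spm_ui source), steps 2-3 amount to:

```python
import numpy as np

def grand_mean_scale(scans, g):
    """Sketch of Grand Mean scaling: gSF = 100/mean(g(i)) for the session,
    then every scan in the session is multiplied by gSF.
    scans -- list of image arrays; g -- list of per-scan globals g(i)."""
    gSF = 100.0 / np.mean(g)          # session scaling factor (cf. line 394)
    return [gSF * np.asarray(s, dtype=float) for s in scans], gSF
```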
>
> 4. Now, here's the key. Beginning at line 411 we now define a mask for 'analysed voxels'.
> The threshold for this is xM.TH(i)=0.8*gSF*g(i) for each scan i. SPM only analyses voxels that have
> a value greater than this threshold for every scan i. The beta0 images will only contain non-NaN
> values for these voxels. Let's call the mean of these 'analysed voxels' h.
>
> Because of this (second) thresholding step, h will not be equal to g. It will be larger. The minimum
> of h will be about 80 (because of the 0.8 in the second thresholding step) and its mean will be
> above 100. It's therefore not surprising that you observe values of h of about 130 or 300.
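The effect of the second threshold can be seen in a toy simulation (hypothetical data; the 0.8 stands in for defaults.mask.thresh):

```python
import numpy as np

# One scaled scan with global ~100 and some spread (hypothetical data).
rng = np.random.default_rng(0)
scan = rng.normal(100.0, 40.0, size=10_000)

g, gSF = 100.0, 1.0                  # per-scan global and scaling factor
mask = scan > 0.8 * gSF * g          # 'analysed voxels': value above 80
h = scan[mask].mean()                # mean h over analysed voxels only

# Discarding everything below 80 necessarily pushes h above g.
print(h > g)
```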
I STILL DO NOT UNDERSTAND WHY THERE HAVE TO BE TWO DIFFERENT DETERMINATIONS
OF WHAT IS WITHIN THE BRAIN AND WHAT IS NOT.
> This all seems very confusing I know !
>
> To summarise, basically there are two thresholds
> (1) one is used for robust estimation of the global effect g.
> (2) the second is used to set up a mask which defines those voxels that will be analysed - subject
> to further masking.
>
> To determine these thresholds we rely on the use of two arbitrary quantities
>
> (1) the factor f1=1/8 used in m/8
> (2) the factor f2=0.8 used later
>
> The second factor can be changed by altering defaults.mask.thresh, which is defined in
> the defaults.m file. If you could set this to m/8 so that f2=f1, then h=g and your perceived problems
> would disappear! Just reducing it should make h closer to g (but SPM will analyse more voxels).
THANKS.
WHAT I HAVE DONE IS:
(1) HIGH-PASS FILTERED THE IMAGES PRIOR TO STATS USING A PROGRAM BY JEJO
KOOLA THAT WORKS EXACTLY THE SAME AS THE SPM FILTER.
(2) USING THE MEAN IMAGE OF THE ENTIRE FILTERED SERIES, I LOOKED AT THE
HISTOGRAM OF INTENSITY (X-AXIS) VERSUS NUMBER OF VOXELS
(Y-AXIS) AND EYEBALLED THE MODE INTENSITY OF THE HIGHER-INTENSITY VOXELS
(I.E. THE PEAK OF THE MEAN DISTRIBUTION). TOM NICHOLS HAS A PROGRAM FOR
DETERMINING THIS VALUE ROUGHLY AS WELL. I DID THIS IN MEDX.
(3) JEJO KOOLA THEN HELPED ME MAKE A PROGRAM TO GRAND MEAN SCALE ALL
VALUES IN THE FILTERED IMAGES. THAT IS, EACH VOXEL WAS MULTIPLIED BY 100 /
WITHIN-BRAIN MODE. THIS MAKES THE MEAN WITHIN-BRAIN INTENSITY 100.
(4) I THEN WENT INTO THE SPM STATS CODE AND EFFECTIVELY IGNORED THE GRAND
MEAN SCALING STEP,
I.E. gSF = 100/100 INSTEAD OF 100/mean(g(i)).
I ALSO CHANGED LINE 411'S xM.TH(i)=0.8*gSF*g(i) TO 0.5*gSF*g(i), WHICH FOR
MY DATASET WAS INCLUSIVE OF WHAT I THOUGHT CORRESPONDED TO BRAIN AND
INCLUDED MINIMAL NON-BRAIN, BASED ON HISTOGRAMS (THE TROUGH BETWEEN
LOW-INTENSITY OUT-OF-BRAIN VOXELS AND HIGH-INTENSITY WITHIN-BRAIN VOXELS).
(5) I THEN RAN SPM STATS USING NO GRAND MEAN SCALING AND NO HIGH-PASS
FILTERING.
(6) I THEN CHECKED THAT BETA0'S INTENSITY-VS.-NUMBER-OF-VOXELS HISTOGRAM
HAD A MODE INTENSITY OF 1200, WHICH IT DID.
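In rough code (a hypothetical NumPy stand-in for the histogram eyeballing and the 100/mode scaling; not the actual MEDx, Jejo Koola, or Tom Nichols programs):

```python
import numpy as np

def within_brain_mode(mean_img, bins=256):
    """Estimate the mode of the high-intensity (within-brain) peak of a
    mean image's histogram. Assumes out-of-brain voxels cluster near zero,
    so the upper half of the intensity range is treated as 'brain'."""
    v = np.asarray(mean_img, dtype=float).ravel()
    counts, edges = np.histogram(v, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    hi = centers > 0.5 * v.max()          # crude cut above the histogram trough
    peak = np.argmax(np.where(hi, counts, 0))
    return centers[peak]

def mode_scale(img, mode):
    """Multiply every voxel by 100/mode so the within-brain baseline ~= 100."""
    return np.asarray(img, dtype=float) * (100.0 / mode)
```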
THANKS
JEFF
>
> Yours,
>
> Will.
>
> > Tom Nichols also noticed this problem and has a solution
> > on his website that was helpful theoretically, but I was unable to
> > integrate it into spm2. Other people e-mailed me having the same problem.
> > Some recommended going back and spatially normalizing the images to a
> > template that did not have so many outside-the-brain voxels. SPM
> > developed their code for determining what is inside and outside the brain
> > (which the beta0 scaling relies on) using tightly cropped images (little
> > outside-the-brain space).
> >
> > Our solution here (with the help of Jejo Koola's programming) was to:
> >
> > (1) high-pass filter all image volumes using the same scheme as spm2,
> > generating a time series of the filtered images as well as a mean image
> > volume of this time series.
> >
> > (2) Then, on the mean filtered image volume, I used a histogram of x
> > (intensity) versus y (number of voxels showing a particular intensity) to find
> > the mode intensity of within-brain voxels
> > (i.e. the peak of the distribution of within-brain voxels). We did
> > this in MEDx. Tom Nichols has a program to calculate this as well.
> >
> > (3) We then grand mean scaled the image volumes by 100 / this
> > within-brain mode intensity value; that is, at each time point, each
> > voxel in the brain was multiplied by 100/mode.
> >
> > This created a time series of grand-mean-scaled images for a subject
> > with a baseline intensity of 100 in what we determined were
> > within-brain voxels.
> >
> > (4) Put the grand-mean-scaled, high-pass-filtered image volume series
> > through fMRI stats --> design, data, estimate, etc. without high-pass
> > filtering the data, essentially turning off the grand mean scaling step
> > from within the spm_fMRI_ui.m file, which I think contains the fMRI design
> > code.
> >
> > The estimation step always produced beta0 image volumes
> > with a within-brain mode of 100 in all 50 subjects looked at to date.
> >
> > The temporal filter from step (1) appears to be accurate.
> > That is, in several subjects I compared the beta0 image volume from spm2's
> > regular way of doing things (filtering the data during stats) versus
> > our added steps. The spm2 regular way produced a beta0
> > image volume that was simply some multiple of the one we got (i.e. beta0 in
> > our approach = some constant * beta0 in spm2's regular approach). That
> > is, this constant was the same for each voxel in beta0; however, the
> > constant was different for each subject, as that was the problem with
> > the spm2 scaling in the first place --> different weightings for different
> > subjects.
> > If the filter did not work, I would not expect this to be true.
> >
> > If you are doing individual-subject analyses and not grouping subjects, then
> > none of this is relevant. However, if you are grouping data across
> > subjects, then this is all relevant.
> >
> > A possible alternative and easier way to do this scaling would be to
> > determine the mode intensity of within-brain voxels for a subject and
> > then scale all beta image volumes by (100 / this mode intensity).
> > If you are using each subject's residual variances as well, then you need to
> > make sure they are multiplied by a scaling factor too (I think it is
> > (100 / mode intensity) squared).
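A sketch of this alternative (hypothetical code; the squared factor for variances follows from Var(aX) = a^2 Var(X) for a multiplicative factor a on the data):

```python
import numpy as np

def rescale_subject(betas, resvar, mode):
    """Scale a subject's beta images by 100/mode and the residual-variance
    image by (100/mode)**2, since variance scales with the square of a
    multiplicative factor applied to the data."""
    f = 100.0 / mode
    betas_scaled = [np.asarray(b, dtype=float) * f for b in betas]
    resvar_scaled = np.asarray(resvar, dtype=float) * f**2
    return betas_scaled, resvar_scaled
```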
> >
> > We did not do things this easier way because we want to easily average
> > time curves across subjects from our grand-mean-scaled, filtered image
> > volumes and compare the time curves between groups.
> >
> > Please let me know if anyone notices any errors in the above scheme(s).
> >
> > Thanks
> > Jeff Lorberbaum
> >
> > On Fri, 19 Dec 2003, Will Penny wrote:
> >
> >
> >>George Tourtellot wrote:
> >>
> >>
> >>>Hello,
> >>>question 1)
> >>>in the first-level analysis, if I want to divide my beta values for each voxel by the
> >>>voxel's mean, is there an easy way to do this? (Am I missing some existing
> >>>functionality? Lots of talk is devoted to global normalization, but they don't say
> >>>how they're doing it.)
> >>>
> >>>
> >>
> >>There were a few mailings on this topic quite recently. See eg.
> >>
> >>http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0312&L=spm&P=R349&I=-1, ie. mail
> >>
> >>from Jeffery Lorberbaum and Russ Poldrack's reply.
> >>
> >>Essentially, you shouldn't need to rescale the betas as the data are already rescaled to have a
> >>mean of 100 (over scans and over the whole volume).
> >>This is implemented by first computing the global mean value, g, over all time series, excluding
> >>non-brain voxels, and then scaling each time series by the factor 100/g.
> >>
> >>
> >>
> >>>question 2)
> >>>is there any detailed documentation corresponding to the '-> Bayesian' button in
> >>>SPM2? No options appear, so I don't even know what it expects the SPM.mat file to
> >>>contain.
> >>>I'm very interested in using Bayesian estimation for 2nd level analysis, so please fill
> >>>me in on usage.
> >>>
> >>>
> >>
> >>The best thing to read is K. Friston and W. Penny (2003) Posterior Probability Maps and SPMs.
> >>Neuroimage, 19(3), pages 1240-1249.
> >>
> >>Best wishes,
> >>
> >>
> >>Will.
> >>
> >>
> >>
> >>>thanks,
> >>>George
> >>>
> >>>
> >>>
> >>>
> >>
> >>--
> >>William D. Penny
> >>Wellcome Department of Imaging Neuroscience
> >>University College London
> >>12 Queen Square
> >>London WC1N 3BG
> >>
> >>Tel: 020 7833 7478
> >>FAX: 020 7813 1420
> >>Email: [log in to unmask]
> >>URL: http://www.fil.ion.ucl.ac.uk/~wpenny/
> >>
> >>
> >
> >
>
>
>