SPM Archives (SPM@JISCMAIL.AC.UK)

Subject: Re: degrees of freedom in SPM2b
From: Karl Friston <[log in to unmask]>
Reply-To: Karl Friston <[log in to unmask]>
Date: Thu, 13 Mar 2003 20:07:24 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (559 lines)

Dear Annabel,

>I have a question regarding how the effective degrees of freedom are
>calculated in SPM2b.  I analyzed a single subject using SPM99 and then
>SPM2b and obtained different results.  Mainly, the effective degrees of
>freedom were about 5-6 times larger and the cluster maxima are different.

SPM2 uses maximum likelihood estimators that are associated
with higher d.f. than the adjusted d.f. associated with ordinary
least squares estimators. I have included the release notes below so
that you can read about this in more detail (see the second paragraph).

With very best wishes,

Karl


New Functionality
Many new components of SPM2 rely on an Expectation Maximisation (EM)
algorithm that is used to estimate restricted maximum likelihood (ReML)
estimates of various hyperparameters or variance components. This enables a
number of useful things. For example, non-sphericity can be estimated in
single-level observation models and, in hierarchical observation models,
one can adopt a parametric empirical Bayesian (PEB) approach to parameter
estimation and inference. Furthermore, the EM algorithm allows fully
Bayesian estimation for any general linear model under Gaussian
assumptions. Examples of all these applications will be found in SPM2.

Non-sphericity and ML estimation
Sphericity refers to the assumption of identically and independently
distributed errors. Departures from this assumption can be in terms of
non-identical distributions (e.g. heterogeneity of variance among
conditions or groups, known as heteroscedasticity) or in terms of
non-independence. Departure from independence implies correlations amongst
the errors that may be induced by observing different things in the same
subject. The two most pertinent sources of non-sphericity in neuroimaging
are heteroscedasticity (e.g. when entering the coefficients of different
basis functions into a second-level analysis) and serial correlations in
fMRI. It is important to estimate non-sphericity for two reasons: (i)
Proper estimates of the co-variances among the error terms are needed to
construct valid statistics. In this instance non-sphericity enters twice,
first in the estimate of the variance component hyperparameters (e.g.
error variance) and second in the computation of the degrees of freedom.
The effective or adjusted degrees of freedom relies upon the Satterthwaite
approximation as described in Worsley & Friston (1995) and is commonly
employed in things like the Greenhouse-Geisser correction. (ii) The second
reason why non-sphericity estimation is important is that it enables
multivariate analyses to be implemented within a univariate framework. For
example, it is entirely possible to combine PET and fMRI data within the
same statistical model, allowing for different error variances and serial
correlations between the two modalities. The parameter estimates that are
obtained from a multivariate analysis and the equivalent univariate
analysis are identical. The only point of departure is at the point of
inference. In the univariate framework this is based upon an F ratio using
the Satterthwaite approximation, whereas multivariate inference uses a
slightly different statistic and distributional approximation (usually one
based upon Wilks' Lambda). The critical thing here is that multivariate
analyses can now proceed within the univariate framework provided by SPM2.

By default SPM2 uses WLS to provide ML estimators based on the
non-sphericity. In this case the weighting 'whitens' the errors, rendering
them i.i.d. The effective degrees of freedom then revert to the classical
degrees of freedom and the Satterthwaite approximation becomes exact. It
is possible to use any WLS estimator (SPM2 will automatically compute the
effective degrees of freedom) but this is not provided as an option in the
user interface. This departure from SPM99 has been enabled by highly
precise non-sphericity estimates, obtained by pooling over voxels using
ReML.
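
The two quantities at issue here can be sketched numerically as follows.
The design, non-sphericity and parameter values are illustrative, not
SPM2's; the Satterthwaite expression is the trace formula described in
Worsley & Friston (1995).

    % Illustrative sketch: Satterthwaite effective d.f. and whitening
    n   = 64;
    X   = [ones(n,1) randn(n,1)];        % toy design matrix
    V   = toeplitz(0.3.^(0:n-1));        % toy non-sphericity (AR(1)-like)
    R   = eye(n) - X*pinv(X);            % residual-forming matrix
    df  = trace(R*V)^2 / trace(R*V*R*V); % effective (adjusted) d.f.
    % Whitening with W such that W*V*W' = I renders the errors i.i.d.
    W   = inv(sqrtm(V));                 % one symmetric square-root choice
    Rw  = eye(n) - (W*X)*pinv(W*X);
    dfw = trace(Rw)^2 / trace(Rw*Rw);    % reverts to n - rank(X), the
                                         % classical degrees of freedom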

In addition to finessing the computation of the t statistics and effective
degrees of freedom, the non-sphericity estimates are also used to provide
conditional estimates in the Bayesian analysis stream (see below).

User specification
For PET, you are requested to specify whether or not the errors are
identically and independently distributed over the different levels of each
factor. In any experimental design there will be one factor whose different
levels are assumed to be replications. This is necessary in order to pool
the sum of squared residuals over these levels to estimate the error
variance hyperparameter. By default, SPM2 assumes that this factor is the
one with the greatest number of levels. A common use of this non-sphericity
option would be in the second-level analysis of basis function coefficients
from a first-level fMRI experiment. Clearly, the coefficients for different
basis functions will have different scaling and error variance properties
and will not be identically distributed. Furthermore, they may not be
independent, because the error terms from any two basis functions may be
coupled through having been observed in the same subject. Using the
non-sphericity option allows one to take up multiple contrasts from the
same subject to a second level, to emulate a mixed or random effects
analysis. In fMRI the key non-sphericity is attributable to serial
correlations in the fMRI time series. SPM2 assumes that an auto-regressive
process with white noise can model these. Critical to this assumption is
the decomposition of fMRI time-series into three components: induced
experimental effects embodied in the design matrix; fixed deterministic
drifts in the confounds (that are used in high-pass filtering); and,
finally, a stochastic error term that shows short-term correlations
conforming to an AR(1) process. The AR(1) characterisation is only
appropriate when long-term correlations due to drift have been modelled
deterministically through high-pass filtering or, equivalently, by
inclusion in the design matrix as nuisance variables.
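
To make the AR(1) assumption concrete, the implied serial-correlation
matrix over scans has entries rho^|i-j|; the values below are illustrative.

    % Illustrative AR(1) serial-correlation matrix for an fMRI time-series
    n   = 128;                     % number of scans (arbitrary)
    rho = 0.2;                     % AR(1) coefficient (illustrative)
    V   = toeplitz(rho.^(0:n-1)); % V(i,j) = rho^|i-j|
    % High-pass filtering (or drift regressors) should have removed the
    % long-term correlations before this short-term model applies.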

If non-sphericity is modelled, weighted least squares (WLS) will be used
during estimation to render the residuals i.i.d. This WLS estimator then
becomes a ML estimator, which is the most efficient of linear unbiased
estimators. To over-ride this default WLS, simply specify the required
weighting matrix before estimation (this matrix is in SPM.xX.W, see below).

Software implementation
The hyperparameters or coefficients governing the different variance
components of the error co-variances are estimated using ReML. The form of
the variance components is described by the covariance basis set in the
non-sphericity sub-field of the SPM structure (SPM.xVi.Vi, see below). In
SPM2 a multiple hyperparameter problem is reduced to a single
hyperparameter problem by factorising the non-sphericity into a single
voxel-specific hyperparameter and the remaining hyperparameters that are
invariant over voxels. By reformulating a multiple hyperparameter problem
like this, one can employ standard least squares estimators and classical
statistics in the usual way. The voxel-invariant hyperparameters are
estimated at voxels that survive the default F test for all effects of
interest (assuming sphericity) during the estimation stage in SPM. This
entails a first pass of the data to identify the voxels that contribute to
the ReML hyperparameter estimation. A second pass then uses the
non-sphericity V (in SPM.xVi.V) to compute the appropriate weighting matrix
(in SPM.xX.W) such that WW' = inv(V). [If W is already specified only a
single estimation pass is required.]
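
One way to construct a W with WW' = inv(V) from an estimated non-sphericity
is via the inverse matrix square root; this sketches the algebra, and is
not necessarily the factorisation SPM2 uses internally.

    % Build a whitening matrix W with W*W' = inv(V) from a non-sphericity V
    [u, s] = svd(V);                       % V symmetric positive-definite
    W = u * diag(1./sqrt(diag(s))) * u';   % symmetric inverse square root
    % Check: W*V*W' is (numerically) the identity, so W*e is i.i.d. when
    % the errors e have covariance proportional to V.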

Bayesian estimation and inference
Having performed a weighted least squares estimation one can optionally
revisit all the 'within mask' voxels to obtain conditional or posterior
estimates. Conditional estimates are the most likely parameter estimates
given the data. This is in contrast to the least squares maximum likelihood
estimators, which are the estimates that render the data most likely. The
critical distinction between the two estimators relies upon prior
information about the parameters. Effectively, under Gaussian assumptions
and the sorts of hierarchical models used in SPM2, the conditional
estimators are shrinkage estimators. This means that the estimates shrink
toward their prior expectation, which is typically zero. The degree of
shrinkage depends upon the relative variance of the errors and the prior
distribution. Although the likelihood estimator is the most accurate from
the point of view of any one voxel, the conditional estimators minimize the
equivalent cost function over the voxels analysed. Currently, in SPM2,
these voxels are all 'in mask' or intracranial voxels. In short, the
conditional estimators will not be the best for each voxel but will, on
average, be the best over all voxels. The conditional estimators provided
in SPM2 rely upon an empirical Bayesian approach, where the priors are
estimated from the data. Empirical Bayes rests on a hierarchical
observation model. The one employed by SPM2 is perhaps the simplest and
harnesses the fact that analyses of imaging time-series implicitly look for
the same effect over many voxels. Consequently, a second level in the
observation hierarchy is provided by voxel-to-voxel variation in the
parameter estimates, which is used as the prior variance. Put simply, the
priors are estimated by partitioning the observed variance at each and
every voxel into a voxel-specific error term and a voxel-wide component
generated by voxel-to-voxel variations in the parameter estimates. Given
the prior variance, one is in a position to work out the posterior or
conditional distribution of the parameter estimates. In turn, this allows
the computation of the posterior probability that the estimate exceeds some
specified threshold. These posterior probabilities constitute a posterior
probability map (PPM) that is a summary of the posterior density specific
to the specified threshold.
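
A toy sketch of shrinkage and of forming a posterior probability at one
voxel, under Gaussian assumptions; all variances, the estimate and the
threshold are made-up numbers (normcdf is from the Statistics Toolbox).

    % Toy conditional (shrinkage) estimate and posterior probability
    b_ml    = 2.1;     % ML parameter estimate at a voxel (illustrative)
    v_err   = 1.0;     % error variance of that estimate (illustrative)
    v_prior = 0.5;     % prior variance, estimated over voxels
    shrink  = v_prior / (v_prior + v_err);  % shrinkage factor
    b_cond  = shrink * b_ml;                % shrinks toward the prior mean (0)
    v_cond  = shrink * v_err;               % conditional variance
    gamma   = sqrt(v_prior);                % threshold: one prior s.d.
    ppm     = 1 - normcdf(gamma, b_cond, sqrt(v_cond));  % P(effect > gamma)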

Having performed the usual least squares estimation, simply select Bayesian
estimators and the appropriate SPM.mat file. Additional Beta images will be
created, prefixed with a C to denote conditional estimates. When
proceeding to inference, if the contrast specified is a t contrast (and
conditional estimates are available) you will be asked whether the
inference should be Bayesian or Classical. If you select Bayesian you will
then be prompted for a threshold to form the posterior probability maps. By
default this is one standard deviation of the prior distribution. The
resulting PPM is treated in exactly the same way as the SPM, with the
exception of P value adjustments, which are required only in a classical
context. Inferences based upon PPMs will generally have higher face
validity, in that they refer to activation effects or differences among
cohorts that exceed a meaningful size. Furthermore, by thresholding the PPM
appropriately one can infer that activation did not occur. This is
important in terms of characterising treatment effects or indeed true
functional segregation.

Software implementation
The expected variance components over voxels attributable to error and
parameter variation at the second level (i.e. voxels) are estimated using
ReML over all intracranial voxels. spm_spm_Bayes.m then revisits each voxel
in turn to compute conditional parameter estimates and voxel-specific
hyperparameters. The conditional variance is computed using a first-order
Taylor expansion about the expectation of the hyperparameters over voxels.

Inference based on False Discovery Rate
In addition to the Gaussian field correction, 'adjusted' p values are now
provided based on FDR. For a given threshold on an SPM, the False Discovery
Rate is the proportion of supra-threshold voxels which are false positives.

Recall that the thresholding of each voxel consists of a hypothesis test,
where the null hypothesis is rejected if the statistic is larger than the
threshold. In this terminology, the FDR is the proportion of rejected tests
where the null hypothesis is actually true. An FDR procedure produces a
threshold that controls the expected FDR at or below q. In comparison, a
traditional multiple comparisons procedure (e.g. Bonferroni or random field
correction) controls the family-wise error rate (FWE) at or below alpha.
The FWE rate is the chance of one or more false positives anywhere (not
just among supra-threshold voxels). If there is truly no signal in the
image anywhere, then an FDR procedure controls FWE, just as Bonferroni and
random field methods do. (Precisely, controlling E(FDR) yields weak control
of FWE.) If there is some signal in the image, an FDR method will be more
powerful than a traditional method.
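
The standard step-up procedure of Benjamini & Hochberg gives the flavour of
how such a threshold is found; a minimal sketch follows (the release notes
do not spell out SPM2's exact implementation).

    % Benjamini-Hochberg step-up threshold on a vector of voxel p-values
    q = 0.05;                                % desired FDR level (illustrative)
    p = sort(pvals(:));                      % pvals: uncorrected p-values
    m = numel(p);
    k = find(p <= (1:m)'/m * q, 1, 'last');  % largest k with p(k) <= k*q/m
    if isempty(k)
        p_thresh = 0;                        % nothing survives
    else
        p_thresh = p(k);   % declare voxels with p <= p_thresh significant
    end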

For a careful definition of FDR-adjusted p-values (and the distinction
between corrected and adjusted p-values) see Yekutieli & Benjamini (1999).

Dynamic Causal Modelling
Dynamic causal modelling is the final option in the data analysis stream
and enables inferences about inter-regional coupling. It is based upon a
generalised convolution model of how experimentally manipulated inputs
evoke changes in particular cortical areas and how these changes induce
changes elsewhere in the brain. The underlying model is an
input-state-output dynamical model that incorporates the hemodynamic model
described in the literature. Critically, the most important [coupling]
parameters of this model are connection strengths among pre-selected
volumes of interest. Conditional estimates of these connection strengths
are derived using a posterior mode analysis that rests upon exactly the
same EM algorithm used in the previous sections. In other words, the
algorithm performs a gradient ascent on the log posterior probability of
the connection strengths given the data. The utility of dynamic causal
modelling is that it enables inferences about connection strengths and the
modulation of connection strengths by experimentally defined effects. For
example, one can make inferences about the existence of effective
connections from posterior parietal cortex to V5 and how attention
increases or decreases this effective connectivity. Interestingly, if one
took the dynamic causal model, focussed on a single region and linearised
it through a first-order Taylor expansion, one would end up with exactly
the same convolution model used in conventional analyses with a hemodynamic
response function and various partial derivatives. This means that the
standard approach to fMRI time-series is a special case of a dynamic causal
model. In contradistinction to previous approaches to effective
connectivity, dynamic causal modelling views all measured brain responses
as evoked by experimental design. The only noise or stochastic component is
observation error. This contrasts with structural equation modelling, and
other variants, where it is assumed that the dynamics are driven by
unobservable noise. We regard dynamic causal modelling (DCM) as a much more
plausible approach to effective connectivity given fMRI data from designed
experiments.
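
The neuronal level of DCM is the bilinear state equation
dx/dt = (A + sum_j u_j(t) B_j) x + C u(t), where A holds the intrinsic
connections, B_j their modulation by input j, and C the direct input
effects. A toy simulation (all matrices and the boxcar input are made up;
the hemodynamic output stage is omitted):

    % Toy simulation of DCM's bilinear neuronal state equation
    A = [-1  0; 0.4 -1];       % intrinsic connections (region 1 -> 2)
    B = {[0 0; 0.2 0]};        % input 1 modulates the 1 -> 2 connection
    C = [1; 0];                % input 1 drives region 1 directly
    dt = 0.01;  T = 1000;
    u = double(mod(1:T, 200) < 100);     % boxcar experimental input
    x = zeros(2, T);                     % neuronal states of two regions
    for t = 1:T-1                        % simple Euler integration
        dx = (A + u(t)*B{1}) * x(:,t) + C * u(t);
        x(:,t+1) = x(:,t) + dt * dx;
    end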

User specification
To specify a DCM, first extract the required volumes of interest using VOI
in the results section. These are stored in VOI.mat files. You will be
requested to select from these files. In addition you will be asked to
specify the experimental inputs or causes, which are exactly the same as
those used to construct the original design matrix. Once the program knows
the number of regions and experimental inputs it has to accommodate, it
will ask you to specify the connectivity structure, first in terms of
intrinsic connections and then in terms of those that are modulated by each
input. Once the specification is complete the model parameters will be
estimated and can be reviewed using the DCM results section. Inferences for
DCM are based upon posterior probabilities and require no correction for
multiple comparisons.

Software implementation
The identification of a DCM uses a generic set of routines for non-linear
system identification under Gaussian assumptions. Essentially, these can be
regarded as Gauss-Newton ascent on the log of the posterior probability.
This provides the maximum a posteriori estimate, or posterior mode, and can
therefore be regarded as a posterior mode analysis using a Gaussian
approximation to the posterior density. The posterior probabilities are
based upon fully Bayesian priors that, in turn, reflect natural constraints
upon the system that ensure it does not exponentially diverge. Priors on
the biophysical parameters of the hemodynamic response per se were
determined using previous empirical studies. See spm_dcm_ui.m and
spm_nlsi.m for operational details.

Hemodynamic modelling
This, from a technical point of view, is exactly the same as DCM but is
restricted to a single voxel or region. In this instance the interesting
parameters are the connections of the experimentally designed inputs or
causes to the flow-inducing state variables of the hemodynamic model. The
posterior distributions of these connections or efficacies are provided to
enable inferences about the ability of a particular input or compound of
causes to evoke a response.

User specification
Within the results section, having placed the cursor over the volume of
interest, you will be asked to specify which experimental causes you want
to enter into the model. The computation then proceeds automatically and a
separate figure window will show the results.

Software implementation
This is a streamlined version of the DCM. See spm_hdm_ui.m for details.

Hemodynamic deconvolution
There is an option in the main menu to form PPIs from fMRI data. PPIs are
psychophysiological or physiophysiological interaction terms that are
usually entered into a linear model as bilinear terms. Because the
interaction takes place at a neuronal level (as opposed to a hemodynamic
one) it is necessary to deconvolve the observed fMRI time-series to
estimate the underlying neuronal responses. This deconvolution uses the
same EM procedure and hierarchical principles used by the Bayesian analyses
above. It can be regarded as Wiener filtering using empirically determined
priors on the frequency structure of the neuronal signal.
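
For intuition, the textbook frequency-domain Wiener filter looks like this;
SPM2's version determines the noise-to-signal weighting empirically,
whereas here it is a fixed, made-up constant and all signals are toys.

    % Toy Wiener deconvolution of a BOLD series y = conv(h, z) + noise
    n = 256;
    t = 0:31;
    h = t.^2 .* exp(-t/2);  h = h / sum(h);        % toy HRF kernel
    z_true = double(rand(1, n) < 0.05);            % sparse neuronal events
    y = conv(z_true, h);  y = y(1:n) + 0.05*randn(1, n);  % observed series
    H   = fft(h, n);  Y = fft(y, n);
    nsr = 0.1;                                     % noise-to-signal ratio
    Z   = conj(H) .* Y ./ (abs(H).^2 + nsr);       % Wiener filter
    z   = real(ifft(Z));                           % estimated neuronal signal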

Bias Correction
A module for bias correcting MR images has been incorporated into SPM2.
Applying bias correction to MR images may allow more accurate spatial
registration (or other processing) to be achieved with images corrupted by
smooth intensity variations. The correction model is non-parametric, and is
based on minimising a function related to the entropy of the image
intensity histogram. A number of approaches already exist that minimise the
entropy of the intensity histogram of the log-transformed image. This
approach is slightly different in that the cost function is based on
histograms of the original intensities, but includes a correction so that
the solution is not biased towards uniformly scaling all intensities to
zero.

Improvements - Structural
Although the look and feel of SPM2 is very similar to SPM99, there have
been a number of implementation improvements. The general strategy is to
ensure the code is as simple and accessible as possible while maintaining a
degree of robustness and efficiency. Specific changes are detailed below:
Software architecture
A number of the revisions detailed above emphasise that modularity has been
preserved and that we have tried to make the code as simple and readable as
possible. A key change in terms of the architecture is the consolidation of
the .mat files that contain the analysis variables and parameters. These
files have been consolidated in such a way that using SPM in batch mode
should be much easier. We have done this as a prelude to planned work using
mark-up languages (XML) to facilitate the review, specification and
implementation of SPM procedures. In brief, SPM2 sets up a single structure
(SPM) at the beginning of each analysis and, as the analysis proceeds,
fills in sub-fields in a hierarchical fashion. This enables one to fill in
early fields automatically and bypass the user interface prompting. After
the design specification fields have been filled in, the design matrix is
computed and placed in a design matrix structure. This, along with a data
structure and non-sphericity structure, is used by SPM to compute the
parameters and hyperparameters. These are saved (as handles to the
parameter and hyperparameter images) as sub-fields of SPM. A contrast
sub-structure is generated automatically and can be augmented at any stage.
This structure array can be filled in automatically after estimation using
spm_contrasts.m. The hierarchical organisation of the sub-function calls
and the SPM structure means that, after a few specification fields are set
in the SPM structure, an entire analysis, complete with contrasts, can be
implemented automatically. An example of a script that fills in the
initial fields of SPM and then calls the required functions can be found in
spm_batch.m. Key fields include (see the sketch after this list):

    * SPM.xY - data structure
    * SPM.nscan - vector of scans per session
    * SPM.xBF - Basis function structure
    * SPM.Sess - Session structure
    * SPM.xX - Design matrix structure
    * SPM.xGX - Global variate structure
    * SPM.xVi - Non-sphericity structure
    * SPM.xM - Masking structure
    * SPM.xsDes - Design description structure
    * SPM.xCon - Contrast Structure
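
A minimal, hypothetical fragment of the batch idea; the top-level field
names come from the list above, but the sub-fields and values shown are
illustrative (spm_batch.m in the distribution is the authoritative
example).

    % Pre-fill a few fields of the SPM structure to bypass the UI prompts
    SPM = struct();
    SPM.nscan = [128 128];     % two sessions of 128 scans (illustrative)
    SPM.xY.P  = P;             % P: matrix of image filenames, one per row
    % ... fill in SPM.xBF, SPM.Sess, SPM.xX, SPM.xGX, SPM.xVi, SPM.xM ...
    % after which estimation and contrast evaluation (spm_contrasts.m)
    % can proceed without further prompting.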

File formats
Currently, the Neuroimaging Informatics Technology Initiative (NIfTI) is
deciding on a strategy to allow different software to share data more
easily. This may involve the development of a new file format for possible
adoption by the neuro-imaging community. No decisions have yet been made,
but eventually the recommendations of the group should be incorporated
into SPM. Unfortunately, this will not be in SPM2. There will however be
slight changes to the SPM file format. By default, images are currently
flipped in the left-right direction during spatial normalisation. This is
for historical reasons and relates to the co-ordinate system of Talairach
and Tournoux being a right-handed co-ordinate system, whereas the
co-ordinate system adopted for storing Analyze images is left-handed. This
will be streamlined, but will require each site to adopt a consistent
co-ordinate system handedness for their images (specified in a file in the
SPM2 distribution). One side effect of this is that images spatially
normalised using SPM99 or earlier will need to be spatially normalised again.

Improvements - Functional
Interpolation
Sinc interpolation is a classical interpolation method that locally
convolves the image with some form of interpolant. Much more efficient
re-sampling can be performed using generalised interpolation (Thévenaz,
2000), where an image is modelled by a linear combination of basis
functions. A continuous representation of the image intensities can be
obtained by fitting the basis functions through the original intensities of
the image. The matrix of basis functions can be considered as square and
Toeplitz, and the bases usually have compact support. In particular,
spatial interpolation in the realignment and spatial normalisation modules
of SPM2 will use B-splines, which are a family of functions of varying
degree. Interpolation using B-splines of degree 0 or 1 is almost identical
to nearest neighbour or linear interpolation respectively. Higher degree
interpolation (2 and above) begins with a very efficient deconvolution of
the basis functions from the original image, to produce an image of basis
function coefficients. Image intensities at new positions can then be
reconstructed using the appropriate linear combination of the basis
functions. Much of the interpolation code was based (with permission from
Philippe Thévenaz) on algorithms released by the Biomedical Imaging Group
(http://bigwww.epfl.ch/) at the Swiss Federal Institute of Technology
Lausanne (EPFL).
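
The degree-0/1 correspondence is easy to check with generic MATLAB
interpolation (SPM2's own B-spline routines are not used here):

    % Degree-0 and degree-1 B-splines reproduce nearest-neighbour and
    % linear interpolation respectively.
    x  = 1:10;  y = sin(x);
    xi = 1:0.25:10;
    y0 = interp1(x, y, xi, 'nearest'); % equivalent to a degree-0 B-spline
    y1 = interp1(x, y, xi, 'linear');  % equivalent to a degree-1 B-spline
    % For degree >= 2 one first deconvolves the B-spline from the data to
    % obtain coefficients c, then evaluates sum_k c(k) * B(xi - k).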

Realignment
Image realignment involves estimating a set of 6 rigid-body transformation
parameters for each image in the time series. For each image, this is done
by finding the parameters that minimise the mean squared difference between
it and a reference image. It is not possible to exhaustively search through
the whole 6-dimensional (7 if the intensity scaling is included) parameter
space, so the algorithm makes an initial parameter estimate (zero rotations
and translations), and begins to iteratively search from there. At each
iteration, the model is evaluated using the current parameter estimates,
and the cost function re-computed. A judgment is then made about how the
parameter estimates should be modified, before continuing on to the next
iteration. This optimisation is terminated when the cost function stops
decreasing.
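
Generically, each iteration looks something like the following Gauss-Newton
step; resample() and jacobian() are hypothetical placeholders for image
resampling and its derivatives with respect to the six parameters, not
SPM2 routines.

    % Generic sketch of iterative rigid-body registration
    q = zeros(6,1);                    % start: zero rotations/translations
    prev = inf;
    for iter = 1:64
        e = resample(src, q) - ref(:); % residuals (hypothetical resampler)
        f = e' * e;                    % squared-difference cost
        if f >= prev, break; end       % stop when the cost stops decreasing
        prev = f;
        J = jacobian(src, q);          % n-by-6 parameter derivatives
        q = q - (J'*J) \ (J'*e);       % Gauss-Newton update
    end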

In order to be successful, the cost function needs to be smooth.
Interpolation artefacts are one reason why the cost function may not be
smooth. Using trilinear interpolation, sampling in the centre of 8 voxels
effectively involves taking the weighted average of these voxels, which
introduces smoothing. Therefore, an image translated by half of a voxel in
three directions will be smoother than an image that has been translated by
a whole number of voxels. The mean squared difference between smoothed
images tends to be slightly lower than that for un-smoothed ones, which has
the effect of introducing unwanted "texture" into the cost function
landscape. Dithering the way that the images are sampled has the effect of
reducing this texture. This has been done for the SPM2 realignment, which
means that less spatial smoothing is necessary for the algorithm to work.
Re-sampling the images uses B-spline interpolation, which is more efficient
than the windowed sinc interpolation of SPM99 and earlier.

The optional adjustment step has been removed, mostly because it is more
correct to include the estimated motion parameters as confounds in the
statistical model than it is to remove them at the stage of image
realignment. This also means that each image can be re-sliced one at a
time, which allows more efficient image I/O to be used. This extra
efficiency should be seen throughout SPM2.

Coregistration
The old default three-step coregistration procedure of SPM99 is now
obsolete. The approach in SPM2 involves optimising an information theoretic
objective function. The mutual information coregistration in the original
SPM99 release exhibited occasional problems due to interpolation
artefacts, but these have now been largely resolved using spatial dithering
(see above). Information theoretic measures allow much more flexible image
registration because they make fewer assumptions about the images. A whole
range of different types of images can now be more easily coregistered in
SPM.

Spatial Normalisation
Estimating the spatial transformation that best warps an image to match one
of the SPM template images is a two-step procedure, beginning with a local
optimisation of the 12 parameters of an affine transformation. This step
has been made more robust by making the procedure more internally
consistent. Affine-registering image A to match image B should now produce
a result that is much closer to the inverse of the affine transformation
that matches image B to image A. The regularisation (a procedure for
increasing stability) of the affine registration has also changed. The
penalty for unlikely warps is now based on assuming that the matrix
logarithm of the affine transformation matrix (after removing rotation and
translation components) is multivariate normal.

The non-linear component has also changed slightly, in that the bending
energy of the warps is used to regularise the procedure, rather than the
membrane energy. The bending-energy model seems to produce more realistic
looking distortions.

Segmentation
The segmentation model has been updated in order to improve the bias
correction component. The version in SPM99 tended to produce a dip in the
middle of the bias corrected images. This is because the bias correction
had a tendency towards scaling the image uniformly by zero (lowest entropy
solution), but was prevented from doing so because the bias field was
constrained to have an average value of one. Because the bias was only
determined from grey and white matter, it tended to push down the intensity
in these regions, and compensated by increasing the intensity of other
regions. This is now fixed with a new objective function.

The segmentation procedure also includes a "cleanup" procedure whereby
small regions of non-brain tissue that are misclassified as brain are
removed. Also, the prior probability images are scaled such that they have
less influence on the segmentation. Previously, abnormal brains could be
extremely problematic if there was (e.g.) CSF where the prior probability
images suggested that there was 0% probability of obtaining it. The
modified code may be able to cope slightly better with these abnormalities.

Specifying fMRI designs (spm_fmri_spm_ui.m)
fMRI design specification is now simpler. The distinction between event-
and epoch-related designs has been effectively removed by allowing the user
to specify stimulus functions that comprise a series of variable-length
boxcars. If the length reduces to zero one is implicitly modelling an
event. This contrasts with SPM99, where the distinction between event- and
epoch-related responses was made at the level of the basis functions (a
hemodynamic basis set or a boxcar epoch set). The main reason for putting
all the design-specific effects in the input functions, as opposed to the
basis functions, is to finesse the specification of inputs to DCM and
hemodynamic models. From the point of view of dynamical modelling, the
basis functions simply serve to approximate the impulse response function
of the input-state-output model that each voxel represents. Consequently,
the basis functions you specify pertain only to the hemodynamic response
function, and these basis functions are assumed to be the same for all
sessions. Onsets and durations of various trials or conditions can now be
specified in scans or seconds. The option to perform a second-order or
generalised convolution analysis using Volterra kernels is likewise assumed
to be the same for all sessions. You can now also specify negative onset
times of up to 32 time bins (time bins default to 1/16 of the inter-scan
interval).
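
A sketch of the unified event/epoch specification: a variable-length boxcar
stimulus function convolved with a hemodynamic response. The HRF below is a
toy gamma function rather than SPM2's canonical one, and all onsets and
durations are made up.

    % Variable-length boxcar stimulus function, specified in scans
    nscan  = 128;
    onsets = [10 40 80];  durations = [0 5 15];  % duration 0 => an event
    u = zeros(nscan, 1);
    for k = 1:numel(onsets)
        len = max(durations(k), 1);              % an event occupies one bin
        u(onsets(k) : onsets(k) + len - 1) = 1;
    end
    t   = (0:31)';
    hrf = t.^5 .* exp(-t) / gamma(6);            % toy gamma HRF, peak at 5
    reg = conv(u, hrf);
    reg = reg(1:nscan);                          % regressor for the design matrix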

Estimation (spm_spm.m)
spm_spm.m has been considerably simplified. Firstly, the Y.mad file, which
used to store the time-series of voxels surviving an F test for all the
effects of interest, has been removed. This means that you have to keep the
original data in order to plot the responses. Although this decreases
robustness, it very much simplifies the software and allows you to
interrogate any brain region using spm_graph.m when reviewing your results
at the point of inference. The second simplification is that smoothness
estimation is now performed on the residual fields after estimation. This
means that spm_spm.m calls separate subroutines after the estimation has
completed. This smoothness estimation is slower but is much more accurate,
because it computes the partial derivatives of the residual fields using a
more finessed interpolation scheme.

As mentioned above, spm_spm.m now performs WLS that defaults to ML
estimation using the estimated or specified non-sphericity in SPM.xVi.V.
The weighting matrix is in SPM.xX.W. If SPM.xVi.V is not specified, ReML
hyperparameters are estimated in a first pass of the data according to the
covariance components in SPM.xVi.Vi.

Results (spm_results_ui.m)
The main revision to the results section has been in terms of plotting,
where the standard error bars have now been replaced by 90% confidence
intervals. The plotting of event-related responses has been upgraded to
provide true finite impulse response (FIR) estimates, for both event- and
epoch-related designs. This is equivalent to a peri-stimulus time histogram
of hemodynamic responses and is estimated by refitting an FIR model at the
selected voxel.
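
An FIR refit at a voxel can be sketched as one regressor per peri-stimulus
time bin; the bin count, onsets and the voxel series y are illustrative.

    % FIR (peri-stimulus) model: one column per post-stimulus bin
    nscan  = 128;  nbins = 12;
    onsets = [10 40 80];              % event onsets, in scans (illustrative)
    X = zeros(nscan, nbins);
    for k = 1:numel(onsets)
        for b = 1:nbins
            i = onsets(k) + b - 1;
            if i <= nscan, X(i, b) = X(i, b) + 1; end
        end
    end
    beta = X \ y;   % y: voxel time-series; beta traces the response over bins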

Users can now simultaneously display a table of results and the functional
activations. After displaying results, click on the Results-Fig button to
the right of the menu on the graphics figure. This will open a new window.
Clicking on the volume or cluster buttons will then display the results
table in this figure. Slices and sections will still display in the main
graphics figure. The Results-Fig window is fully active, and clicking on
individual lines in the table will move the cursor to the appropriate
points on the displayed sections.

Availability & licencing

SPM is made freely available to the [neuro]imaging community, to promote
collaboration and a common analysis scheme across laboratories. The
software represents the implementation of the theoretical concepts of
Statistical Parametric Mapping in a complete analysis package.

SPM (being the collection of files given in the manifest in the Contents.m
file included with each distribution) is free but copyright software,
distributed under the terms of the GNU General Public Licence as published
by the Free Software Foundation (either version 2, as given in file
spm_LICENCE.man, or at your option, any later version). Further details on
"copyleft" can be found at http://www.gnu.org/copyleft/. In particular, SPM
is supplied as is. No formal support or maintenance is provided or implied.

The authors are research scientists in the fields of neuroscience,
statistics and image processing, for whom SPM is the vehicle for the
implementation and dissemination of ideas. We aren't software engineers,
and (unfortunately) don't have the resources to formally support SPM. The
SPM email discussion list <[log in to unmask]> provides an informal forum
for discussion of technical and theoretical SPM issues, and is monitored by
the authors. We ask that you read the SPM documentation, review past
discussion on the email list, and exhaust your local avenues of SPM
expertise before contacting us, either directly or via the SPM discussion
list.
