UNIVERSITY OF GLASGOW
STATISTICS SEMINAR PROGRAMME
Wednesday, 30 January, 3 pm
Category theory and statistical models
Ernst WIT (University of Glasgow)
Wednesday, 20 February, 3 pm
Statistical estimators of the entropy of a random vector and their
applications in testing statistical hypotheses
Nikolai LEONENKO (Cardiff University)
Wednesday, 27 February, 3 pm
Some considerations on estimating overcomplete representations
and the kernel eigenvalue problem
Mark GIROLAMI (University of Paisley)
Wednesday, 6 March, 3 pm
Modelling spatially correlated data via mixtures:
A Bayesian approach
Carmen FERNANDEZ (University of St Andrews)
Wednesday, 13 March, 3 pm
Autoregressive aided periodogram bootstrap for time series
Jens-Peter KREISS (TU Braunschweig, Germany)
Seminars take place in Room 203, Mathematics Building,
University of Glasgow
For further information please contact the seminar organiser:
Ilya Molchanov
University of Glasgow : e-mail: [log in to unmask]
Department of Statistics : Ph.: + 44 141 330 5141
Glasgow G12 8QW : Fax: + 44 141 330 4814
Scotland, U.K. : http://www.stats.gla.ac.uk/~ilya/
ABSTRACTS
CATEGORY THEORY AND STATISTICAL MODELS
Every statistical model has natural "extensions," e.g., by adding a
new data point, by omitting one or more factors, by merging several
classes, or by changing the measurement scale. Closure under natural
extensions is a form of logical consistency, and models that lack this
property are suspect. Without extendibility a statistical model is
unsuitable for statistical inference.
I'll present an explicit formal description of a statistical model
that incorporates its extensions. Category theory is particularly
suited for this job. The building blocks of category theory are
categories. These are collections of sets and maps on these sets that
are closed under composition and include the identity maps. Functors
are defined as homomorphisms of categories, whereas natural
transformations are homomorphisms of functors. A sensible statistical
model is defined as a natural transformation of the parameter space
functor into the functor of probability measures. It depends on the
structure of the problem which category underlies the statistical
model. For instance, for counts in a contingency table the underlying
category is typically a subcategory of the surjective maps, whereas
for treatments in a designed experiment it is a subcategory of the
injective maps.
As a result of our algebraic analysis of a statistical model, several
well-known models for contingency tables and several linear models are
discredited, while several new models are proposed.
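As a purely illustrative aside (not taken from the abstract), the naturality condition can be made concrete for the multinomial model of a contingency table: merging classes through a surjective map should commute with the model, i.e. pushing the multinomial distribution forward by merging counts gives the same distribution as applying the model to the merged parameter. A minimal Python sketch, in which the map merge and the toy parameters are assumptions of the example:

import numpy as np

rng = np.random.default_rng(0)

# Surjective map on classes: categories {0,1,2,3} are merged into {0,1}
merge = np.array([0, 0, 1, 1])                    # class i is sent to class merge[i]

theta = np.array([0.1, 0.2, 0.3, 0.4])            # parameter on the fine classes
theta_merged = np.bincount(merge, weights=theta)  # induced parameter on the merged classes

n, draws = 50, 200_000

# Route 1: sample counts under the fine model, then merge the counts (push-forward)
fine = rng.multinomial(n, theta, size=draws)
pushed = np.zeros((draws, theta_merged.size))
for j, g in enumerate(merge):
    pushed[:, g] += fine[:, j]

# Route 2: apply the model directly to the merged parameter
direct = rng.multinomial(n, theta_merged, size=draws)

# Naturality: both routes give the same distribution (up to Monte Carlo error);
# compare means and covariances as a crude check.
print(pushed.mean(axis=0), direct.mean(axis=0))
print(np.cov(pushed.T))
print(np.cov(direct.T))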
STATISTICAL ESTIMATORS OF THE ENTROPY OF A RANDOM VECTOR AND THEIR
APPLICATIONS IN TESTING STATISTICAL HYPOTHESES
We consider the problem of estimating the Shannon measure of
information, or entropy, of a random vector from an independent sample.
We propose a class of asymptotically unbiased and consistent
estimates of the unknown entropy and some entropy-based tests of
goodness of fit. The main idea behind the construction of such tests is
the maximum entropy principle. We also present a test of independence
based on the sample entropy in the bivariate case.
This talk is based on recent joint work with M. Goria, P. L.
Novi Inverardi (University of Trento) and V. Mergel (University of
Florida).
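Purely as an illustration (the abstract does not say which estimates are used), one widely studied class of asymptotically unbiased and consistent entropy estimates is based on nearest-neighbour distances. A minimal Python sketch of such an estimator; the Euclidean metric, the number of neighbours and the sanity check are assumptions of the example:

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=1):
    # Nearest-neighbour estimate of the differential entropy (in nats)
    # of a d-dimensional sample x of shape (n, d).
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # distance from each point to its k-th nearest *other* sample point
    r = cKDTree(x).query(x, k=k + 1)[0][:, k]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log volume of the unit d-ball
    return d * np.mean(np.log(r)) + log_vd + digamma(n) - digamma(k)

# Sanity check against a 3-dimensional standard normal,
# whose true entropy is 0.5 * d * log(2 * pi * e)
rng = np.random.default_rng(1)
sample = rng.standard_normal((5000, 3))
print(knn_entropy(sample, k=1), 1.5 * np.log(2 * np.pi * np.e))

Entropy-based goodness-of-fit tests can then compare such an estimate with the largest entropy attainable under the null hypothesis, in the spirit of the maximum entropy principle mentioned above.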
SOME CONSIDERATIONS ON ESTIMATING OVERCOMPLETE REPRESENTATIONS
AND THE KERNEL EIGENVALUE PROBLEM
This talk will be in two parts. The first will consider the problem of
estimating overcomplete representations of natural signals. The second
part will look at some relations between orthogonal series density
estimation and the kernel eigenvalue problem.
Part 1.
Overcomplete representations have been studied in signal and image
processing as a means of providing a compact form of signal
representation. This compactness suggests a sparse representation as
there may be a large set of specialized basis functions available with
which to compose the signal or image but few of these will be required
for any one composition.
The proposal of viewing the decomposition of data into an overcomplete
basis as a probabilistic model was originally presented by Lewicki &
Sejnowski. In that work a sparse prior probability on the basis
coefficients, in the form of a Laplacian, is employed to remove the
redundancy inherent in an overcomplete representation. In the method
of Lewicki & Sejnowski the estimation of the basis vectors requires a
nonlinear (gradient based) optimization of the approximated posterior
density for each data point to obtain the MAP value of the
corresponding basis coefficient which is then used for each gradient
update of the basis vectors. Due to the Laplace approximation which is
employed, there is no guarantee that the data likelihood increases at
each iteration. While the Laplace approximation of the data likelihood
is sufficient for generating gradients for learning, the actual value
of the likelihood may be inaccurate, giving unreliable estimates of
measures such as coding cost.
In contrast to gradient-based algorithms, I will present a method for
estimating overcomplete bases and inferring the most probable
coefficients based on the expectation maximisation (EM) algorithm. The
intractable nature of the posterior integral is transformed into a
Gaussian form by representing the prior density as a variational
optimisation problem. This provides a strict lower-bound on the data
likelihood which is optimised within the EM updating procedure. The
necessity for a separate nonlinear optimisation to obtain the MAP
coefficient estimates for each data point is obviated, as the MAP value
coincides with the posterior mode, which is available from the
moments of the posterior lower-bound. Due to the well-known
convergence properties of the EM algorithm, and the rigorous
lower-bound on the true posterior given by the variational
approximation, there is a guaranteed increase in the data likelihood,
which approaches the true likelihood, at each EM step. Some simple
demonstrations of the method will be provided.
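The sketch below is only a schematic Python illustration of how such a variational EM scheme can be set up for the linear model x = A s + noise with a Laplacian prior on the coefficients; the particular Gaussian bound exp(-|s|) >= exp(-s^2/(2 xi) - xi/2), the fixed noise variance and the per-point update order are assumptions of this sketch, not necessarily the formulation used in the talk.

import numpy as np

rng = np.random.default_rng(0)

def variational_em(X, n_basis, n_iter=50, noise_var=0.01):
    # Schematic variational EM for x = A s + e with a Laplacian prior on s.
    # X has shape (n_samples, n_dim); the basis A has shape (n_dim, n_basis).
    n_samples, n_dim = X.shape
    A = rng.standard_normal((n_dim, n_basis))
    xi = np.ones((n_samples, n_basis))      # variational parameters, one per coefficient

    for _ in range(n_iter):
        SS = np.zeros((n_basis, n_basis))   # accumulates E[s s^T]
        XS = np.zeros((n_dim, n_basis))     # accumulates x E[s]^T
        for i, x in enumerate(X):
            # E-step: under the Gaussian bound exp(-|s|) >= exp(-s^2/(2 xi) - xi/2)
            # the posterior over s is Gaussian with the precision and mean below,
            # so the MAP coefficient estimate is simply the mean of this Gaussian.
            prec = A.T @ A / noise_var + np.diag(1.0 / xi[i])
            cov = np.linalg.inv(prec)
            mean = cov @ (A.T @ x) / noise_var
            second = cov + np.outer(mean, mean)
            xi[i] = np.sqrt(np.diag(second))    # tightest bound at xi = sqrt(E[s^2])
            SS += second
            XS += np.outer(x, mean)
        # M-step: update the (overcomplete) basis from the expected sufficient statistics
        A = XS @ np.linalg.inv(SS)
    return A, xi

# Toy usage: an overcomplete basis (more basis vectors than data dimensions)
X = rng.standard_normal((200, 4))
A_hat, _ = variational_em(X, n_basis=8)
print(A_hat.shape)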
Part 2.
Kernel principal component analysis has been introduced as a method of
extracting a set of orthonormal nonlinear features from multivariate
data, and many impressive applications have been reported in the
literature. The second part of this talk presents the view that the
eigenvalue decomposition of a kernel matrix can also provide the
discrete expansion coefficients required for a non-parametric
orthogonal series density estimator. In addition to providing novel
insights into non-parametric density estimation this viewpoint
provides an intuitively appealing interpretation for the nonlinear
features extracted from data using kernel principal component
analysis.
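Purely as a numerical illustration of this connection (not code from the talk), the sketch below reads the expansion coefficients of a truncated orthogonal series density estimate off the eigendecomposition of a Gaussian kernel matrix and compares the result with the ordinary kernel density estimate; the bandwidth, the Nystrom-style eigenfunctions and the truncation level are assumptions of the example.

import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 0.6, 150), rng.normal(1, 1.0, 150)])  # 1-d sample
n, h = x.size, 0.4                                                       # bandwidth (assumed)

def gauss(u, v):
    # Gaussian density kernel, so each k(., x_m) integrates to one
    return np.exp(-0.5 * ((u[:, None] - v[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))

# Eigendecomposition of the kernel matrix evaluated on the sample
K = gauss(x, x)
lam, U = np.linalg.eigh(K)
lam, U = lam[::-1], U[:, ::-1]            # sort eigenpairs by decreasing eigenvalue

grid = np.linspace(-5, 4, 400)
Kg = gauss(grid, x)                       # kernel between grid points and sample points

M = 10                                    # number of retained eigen-terms (assumed)
# Nystrom-style eigenfunctions on the grid, and the matching series coefficients,
# both obtained from the eigenvalues and eigenvectors of the kernel matrix
phi = Kg @ U[:, :M] * (np.sqrt(n) / lam[:M])
coef = lam[:M] * U[:, :M].sum(axis=0) / n ** 1.5

series_density = phi @ coef               # truncated orthogonal series estimate
kde = Kg.mean(axis=1)                     # ordinary kernel density estimate
# The discrepancy shrinks as M grows and vanishes when all n terms are retained
print(np.max(np.abs(series_density - kde)))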
MODELLING SPATIALLY CORRELATED DATA VIA MIXTURES: A BAYESIAN APPROACH
We address the general question of mixture modelling for spatial data,
where each observation takes place at a different geographical
location. In contrast with the standard setting, the weights in the
mixture model are observation-specific and, thus, we can incorporate
spatial dependence through the prior distribution of the weights. The
basic idea that we try to capture is that observations that correspond
to nearby regions are, a priori, more likely to have similar values of
the weights than observations from regions that are far apart.
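To give a concrete feel for this basic idea, the toy Python sketch below generates observation-specific mixture weights on a grid of regions that are, a priori, similar for nearby regions; the simple neighbour-averaging smoother, the softmax link and all numerical settings are illustrative assumptions only, and not the Markov random field priors presented in the talk (described below).

import numpy as np

rng = np.random.default_rng(3)

# Toy setting: one observation per region on a 20 x 20 grid, a 3-component
# Normal mixture, and observation-specific weights made spatially smooth
# by repeatedly averaging Gaussian noise over neighbouring regions.
side, n_comp = 20, 3
mu = np.array([-2.0, 0.0, 3.0])            # component means (assumed)

field = rng.standard_normal((n_comp, side, side))
for _ in range(30):                        # neighbour averaging as a crude spatial smoother
    field = 0.5 * field + 0.125 * (np.roll(field, 1, 1) + np.roll(field, -1, 1)
                                   + np.roll(field, 1, 2) + np.roll(field, -1, 2))

# Observation-specific mixture weights: nearby regions get similar weights
logits = 3.0 * field
weights = np.exp(logits) / np.exp(logits).sum(axis=0)      # softmax over components

# Each region draws its component label from its own weights, then its observation
comp = np.array([[rng.choice(n_comp, p=weights[:, i, j]) for j in range(side)]
                 for i in range(side)])
y = rng.normal(mu[comp], 1.0)
print(weights.shape, y.shape)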
In this talk we present two alternative models (i.e. prior
distributions on the weights) to achieve this aim, both of which make
use of Markov random fields. We will describe these models in detail
and comment on their similarities and differences. We perform a fully
Bayesian analysis and place prior distributions on component-specific
parameters and the number of mixture components, in addition to the
(spatially structured) uncertainty about the weights. Some aspects of
the numerical implementation (through Reversible Jump MCMC) will also
be discussed.
Finally, these models are applied in the context of disease mapping in
geographical epidemiology. We will first consider several artificial
datasets, designed to reproduce interesting situations that could
occur in practice. Next we analyse a real dataset. We will compare the
results obtained under our two mixture models with those obtained
under a popular model in the disease mapping literature, and highlight
the advantages/disadvantages of each of these approaches.
This is joint work with Peter Green (University of Bristol).
AUTOREGRESSIVE AIDED PERIODOGRAM BOOTSTRAP FOR TIME SERIES
A bootstrap methodology for the periodogram of a stationary process is
proposed which is based on a combination of a time domain parametric
and a frequency domain nonparametric bootstrap. The parametric fit is
used to generate periodogram ordinates that imitate the essential
features of the data and the weak dependence structure of the
periodogram while a nonparametric (kernel based) correction is applied
in order to catch features not represented by the parametric fit. The
asymptotic theory developed shows validity of the proposed bootstrap
procedure for a large class of periodogram statistics. For important
classes of stochastic processes, validity of the new procedure is
also established for periodogram statistics not captured by existing
frequency domain bootstrap methods based on independent periodogram
replicates.
This is a joint paper with Efstathios Paparoditis from the University
of Cyprus.
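A rough Python sketch of this two-stage idea, combining an autoregressive fit in the time domain with a multiplicative kernel correction in the frequency domain, is given below; the AR order, the least-squares fit, the residual resampling, the smoothing kernel and the burn-in are assumptions of the sketch rather than details of the proposed procedure.

import numpy as np

rng = np.random.default_rng(4)

def periodogram(x):
    # Periodogram at the Fourier frequencies 2*pi*j/n, j = 1, ..., n//2
    n = x.size
    coeffs = np.fft.rfft(x - x.mean())[1:n // 2 + 1]
    return np.abs(coeffs) ** 2 / (2 * np.pi * n)

def fit_ar(x, p):
    # Least-squares fit of an AR(p) model; returns coefficients, residuals, noise variance
    Y = x[p:]
    Z = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    a = np.linalg.lstsq(Z, Y, rcond=None)[0]
    res = Y - Z @ a
    return a, res - res.mean(), res.var()

def ar_spectral_density(a, sigma2, freqs):
    e = np.exp(-1j * np.outer(freqs, np.arange(1, a.size + 1)))
    return sigma2 / (2 * np.pi * np.abs(1 - e @ a) ** 2)

# Observed series (simulated here purely for the demonstration)
n, p = 512, 4
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
I_obs = periodogram(x)
a, res, sigma2 = fit_ar(x, p)
f_ar = ar_spectral_density(a, sigma2, freqs)

# Nonparametric correction: kernel-smoothed ratio of the observed periodogram
# to the spectral density of the fitted AR model
def smooth(values, width=9):
    kernel = np.ones(width) / width
    return np.convolve(values, kernel, mode="same")

q = smooth(I_obs / f_ar)

# One bootstrap replicate: simulate from the fitted AR model with resampled
# residuals, take its periodogram, and apply the multiplicative correction
x_star = np.zeros(n + 100)                 # extra 100 values as burn-in
for t in range(p, x_star.size):
    x_star[t] = x_star[t - p:t][::-1] @ a + rng.choice(res)
x_star = x_star[100:]
I_star = q * periodogram(x_star)
print(I_star[:5])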