MULTI-RESOLUTION AND MULTISCALE METHODS IN IMAGE PROCESSING
Joint meeting of
RSS Study Group in Statistical Image Analysis and Processing
and the British Machine Vision Association
Wednesday 5th April 2000
Royal Statistical Society, 12 Errol Street, London EC1Y 8LX
Programme:
10.00 Coffee
10.30 Simplifying images
Lewis Griffin (Vision Sciences, Aston)
11.15 Location dependent models of image variation
Idris Eckley (Mathematics, Bristol)
12.00 A multirate, circular Hough transform using wavelets
A. A. Bharath (Biological & Medical Systems, Imperial College)
12.30 Nonlinear gray-scale granulometries for texture analysis
Jennifer McKenzie (Electronic and Electrical Engineering, Strathclyde)
13.00 Lunch
14.00 Multi-resolution algorithms and subpixel restoration
Chris Jennison (Mathematical Sciences, Bath)
14.50 Multiresolution colour texture segmentation
Maria Petrou (Electronic Engineering, Surrey)
15.40 Tea
16.00 Multiscale entropy for semantic description of images and signals
Fionn Murtagh (Computer Science, Queen's, Belfast)
16.50 Close of meeting
All welcome. Attendance free; no pre-registration necessary.
Sandwiches will be available at lunchtime.
Nearest underground stations: Old Street, Moorgate and Barbican.
Tel: 0171-638-8998
==============================================================================
Abstracts
=========
Simplifying images
Lewis Griffin (Vision Sciences, Aston)
If an image is simplified by repeatedly replacing its values with the mean
in an infinitesimal neighbourhood, it evolves according to the diffusion
equation Lt=Lxx+Lyy (subscripts denote differentiation). This equation
can alternatively be written in gauge coordinates as Lt=Lvv+Lww, where the
v-direction is tangent to the isophote and the w-direction is along the
gradient. The
alternative evolution scheme of Lt=Lvv, known as mean curvature flow, has
also attracted attention. Guichard & Morel [1996, Ceremade Technical Report
#9335] showed that this equation describes the operation of repeatedly
applied infinitesimal median filtering. I have proved that repeated
infinitesimal mode filtering is described by Lt=Lvv-2Lww.
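The link between repeated local-mean filtering and the diffusion equation can be illustrated with a small forward-Euler sketch. This is an illustrative NumPy discretisation, not the talk's formulation; the function name and step size are assumptions.

```python
import numpy as np

def mean_filter_step(L, eps=0.25):
    """One small mean-filtering step: nudge each value toward the average
    of its 4-neighbours.  Repeating this is a forward-Euler discretisation
    of the diffusion equation Lt = Lxx + Lyy (reflecting boundaries)."""
    padded = np.pad(L, 1, mode="edge")          # Neumann (reflecting) boundary
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1]
           + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * L)
    return L + eps * lap                         # eps <= 0.25 for stability

# A noisy step edge: repeated filtering smooths it while preserving the mean.
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])
img = img + 0.1 * rng.standard_normal((16, 16))
out = img.copy()
for _ in range(50):
    out = mean_filter_step(out)
```

With reflecting boundaries the discrete Laplacian conserves total grey value, so the image evolves toward a constant, as diffusion predicts.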
Location dependent models of image variation
Idris Eckley (Mathematics, Bristol)
This talk considers the application of wavelets to the problem of
modelling and estimating the covariance structure of an image. Our
approach introduces the concept of locally stationary, two-dimensional
wavelet processes: a class of processes which permits the second-order
structure of an image to vary over space. In contrast to the Fourier
decomposition, the locally stationary wavelet process supplies a form of
location-scale decomposition. As a consequence, certain
location-dependent measures of variation may be defined which could, for
example, be applied in the analysis of textured images. We conclude by
outlining some recent developments within this modelling approach.
A multirate, circular Hough transform using wavelets
A. A. Bharath (Biological & Medical Systems, Imperial College)
The Compact Hough Transform can be expressed as a spatial filtering of
image feature fields. Using steerable filters, this can be implemented
as a convolution between a number of wavelet-like basis masks, and a
series of "control" images. We apply this idea to design a circular
Hough transform which constructs accumulator spaces for a number of
circle sizes. The operators used for estimating gradient data are the
multi-rate operators of a steerable quadrature wavelet pyramid, and the
gradient magnitude data is used to weight accumulator contributions. The
issue of interpolating between accumulator spaces to achieve fine
precision in parameter space is proposed as an area for further
research.
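The gradient-weighted voting idea behind a compact circular Hough transform can be sketched in plain NumPy for a single radius. This leaves out the steerable quadrature pyramid entirely; the function name and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def circular_hough(image, radius):
    """Single-radius circular Hough accumulator.  Each edge pixel casts
    one vote at the point one radius away along its gradient direction
    (both senses, since the centre may lie on either side), weighted by
    the local gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))   # gy: axis 0, gx: axis 1
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    ys, xs = np.nonzero(mag > 1e-6)
    for y, x in zip(ys, xs):
        ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        for s in (+1, -1):
            cy = int(round(y + s * radius * uy))
            cx = int(round(x + s * radius * ux))
            if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                acc[cy, cx] += mag[y, x]        # magnitude-weighted vote
    return acc

# A bright disc of radius 10 centred at (32, 32): the accumulator
# should peak near the disc centre.
yy, xx = np.mgrid[0:64, 0:64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2).astype(float)
acc = circular_hough(disc, radius=10)
peak = np.unravel_index(acc.argmax(), acc.shape)
```

Building one such accumulator per radius gives the stack of accumulator spaces described above; interpolating between them is the open question the abstract raises.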
Nonlinear gray-scale granulometries for texture analysis
Jennifer McKenzie (Electronic and Electrical Engineering, Strathclyde)
Nonlinear gray-scale granulometries are multiresolution tools which may
be used for image decomposition. They act as image sieves, allowing
larger and larger 'grains' of image data to fall through at each step of
the granulometric process. The amount by which the volume of the image
decreases at each sieving stage can be used for statistical analysis
and comparison of the image and of the size, shape and orientation of its
'grains', allowing analysis and classification of image texture content.
The above properties allow segmentation and texture analysis which
is translation and rotation invariant, and which has a high tolerance to
changes in lighting levels. This paper investigates their usefulness
for automatic inspection processes, and examines how the mesh
dimensions used affect the accuracy of the results.
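The sieving process can be sketched with successive grey-scale openings, recording the volume that falls through at each stage (the "pattern spectrum"). This is a generic morphological sketch using square structuring elements, not the talk's particular nonlinear granulometry; the function name is an assumption.

```python
import numpy as np
from scipy.ndimage import grey_opening

def pattern_spectrum(image, max_size):
    """Grey-scale granulometry: open the image with squares of growing
    size and record how much volume (sum of grey values) each sieving
    stage removes.  The resulting pattern spectrum summarises the size
    distribution of the image's bright 'grains'."""
    volumes = [image.sum()]
    for k in range(1, max_size + 1):
        opened = grey_opening(image, size=(2 * k + 1, 2 * k + 1))
        volumes.append(opened.sum())
    return -np.diff(np.array(volumes, dtype=float))  # volume lost per stage

# Texture of two 3x3 bright grains: they survive a 3x3 opening but fall
# through once the structuring element grows to 5x5.
img = np.zeros((32, 32))
img[4:7, 4:7] = 1.0
img[20:23, 10:13] = 1.0
ps = pattern_spectrum(img, max_size=3)
```

The stage at which the spectrum peaks reveals the dominant grain size, which is what makes the decomposition useful for texture classification.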
Multi-resolution algorithms and subpixel restoration
Chris Jennison (Mathematical Sciences, Bath)
The forms of probabilistic model assumed in Bayesian image analysis can
be applied at various scales of resolution. In particular, they are not
restricted to the pixel scale of the imaging process. Thus, under
suitable assumptions, one may restore detail at below the pixel level.
In implementing subpixel algorithms it is natural to start work at
coarser levels, especially when data are noisy; indeed, such a
multi-resolution strategy can be of benefit more generally when
pixel-based MCMC methods run too slowly. I shall present research with
colleagues at Bath and a variety of examples to illustrate these ideas.
Multiresolution colour texture segmentation
Maria Petrou (Electronic Engineering, Surrey)
Colour is a property that is meaningful only in relation to the human
vision system. The quality of a generic colour texture segmentation
algorithm, therefore, can only be judged by whether it agrees with the
segmentation performed by humans. Although texture is a property that
is often associated with scale, this is not so for colour. However,
recent psychophysical studies have shown that colour perception is
resolution dependent. A colour texture segmentation scheme will be
presented that takes into consideration human colour perception as a
function of resolution. The scheme utilises a multiresolution
probabilistic relaxation approach to propagate the segmentation results
from one level of resolution to the next.
Multiscale entropy for semantic description of images and signals
Fionn Murtagh (Computer Science, Queen's, Belfast)
Multiscale entropy is based on the wavelet transform and noise
modelling. We describe how it can be used for signal and image
filtering and deconvolution. We then proceed to the use of
multiscale entropy for description of image content. We pursue
two directions of enquiry: determining whether signal is present
in the image or not, possibly at or below the image's noise level;
and showing that multiscale entropy is very well correlated with the
image's content in the case of astronomical stellar fields. Knowing
that multiscale entropy represents well the content of the image,
we finally use it to define the optimal compression rate of the
image. In all cases, a range of examples illustrate these new results.
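The idea of an entropy defined on wavelet coefficients under a noise model can be illustrated with a toy 1-D Haar version. The real method uses an a trous transform and a proper noise model; the quadratic Gaussian-noise penalty and all names below are simplifying assumptions for illustration only.

```python
import numpy as np

def haar_detail_levels(signal, levels):
    """Successive Haar analysis steps, keeping the detail (difference)
    coefficients at each scale."""
    details, smooth = [], signal.astype(float)
    for _ in range(levels):
        even, odd = smooth[0::2], smooth[1::2]
        details.append((even - odd) / np.sqrt(2))
        smooth = (even + odd) / np.sqrt(2)
    return details

def multiscale_entropy(signal, sigma, levels=3):
    """Toy wavelet-domain entropy under a Gaussian noise model: each
    coefficient contributes w^2 / (2 sigma^2), so pure noise yields a
    low, structure-free value while embedded signal raises it."""
    return sum(float(np.sum(d ** 2)) / (2 * sigma ** 2)
               for d in haar_detail_levels(signal, levels))

# Pure noise versus the same noise with an embedded bump: the bump's
# edges create large coefficients across scales, raising the entropy.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 256)
spike = noise.copy()
spike[100:110] += 8.0
```

This captures the first line of enquiry above: a signal hidden in noise leaves a detectable imprint on the multiscale entropy even when it is hard to see at any single scale.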