Royal Statistical Society
Statistical Computing Section Meeting
Wednesday 30 May, 2.00 - 4.30pm (tea at 3.20pm)
at
The Royal Statistical Society, 12 Errol St, London EC1
(nearest Underground stations: Barbican, Moorgate & Old Street)
Statistical Computing Section AGM followed by meeting.
Relevance Vector Machines: New methods for Classification
-------------------------------------------------------
and Regression
--------------
Support Vector Machine (SVM) classifiers have recently
been the subject of intense research activity within the
pattern recognition and machine learning community. They
are state-of-the-art methods and are said to be faster
than neural networks. This meeting aims to introduce the
methods associated with SVMs to the wider statistical
community.
2:10pm Sean Holden (University College London)
"An introduction to Support Vector Machines"
Synopsis: This talk gives an introduction to Support
Vector Machines (SVMs). SVMs are a relatively recent
approach to the classical problems of classification,
regression and pattern recognition. They combine a sound
theoretical foundation in probability theory, functional
analysis and computational learning theory with a
computational grounding in numerical analysis, and have
enjoyed widespread success in practical applications.
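The core idea of a linear SVM, finding a separating
hyperplane with a large margin, can be sketched in a few
lines. This is an illustrative sketch only: the toy data,
learning rate and regularisation constant are assumptions,
and real SVM solvers use quadratic programming rather than
this simple subgradient descent on the hinge loss.

```python
# Minimal linear SVM trained by subgradient descent on the
# regularised hinge loss (sketch; not a production solver).
# Toy data: two clearly separated classes in the plane.
data = [((2.0, 2.0), 1), ((3.0, 1.5), 1),
        ((-1.0, -2.0), -1), ((-2.0, -1.0), -1)]

w, b = [0.0, 0.0], 0.0
lam, lr = 0.01, 0.1        # regularisation strength, step size (assumed)

for epoch in range(200):
    for (x, y) in data:
        margin = y * (w[0] * x[0] + w[1] * x[1] + b)
        if margin < 1:      # point violates the margin: hinge loss active
            w = [w[i] + lr * (y * x[i] - lam * w[i]) for i in range(2)]
            b += lr * y
        else:               # correctly classified: only shrink w
            w = [w[i] - lr * lam * w[i] for i in range(2)]

def predict(x):
    """Sign of the decision function w.x + b."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1
```

After training, `predict` labels the toy points by which
side of the learned hyperplane they fall on.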
2:40pm Peter Sollich (King's College London)
"Probabilistic methods for Support Vector Machines"
Synopsis: I describe a framework for interpreting SVMs
as maximum a posteriori (MAP) solutions to inference
problems with Gaussian process priors. This can provide
intuitive guidelines for choosing a "good" SVM kernel.
It can also assign (by evidence maximization) optimal
values for parameters such as the misclassification
penalty C, which cannot be determined unambiguously from
properties of the MAP solution alone (such as its
cross-validation error). I illustrate this using a simple
approximate expression for the SVM evidence. Once C has
been determined, class probabilities for SVM predictions
can also be obtained.
3:20pm Tea
3:50pm Mike Tipping (Microsoft Research UK)
"Sparse Bayesian Learning and the Relevance Vector Machine"
Synopsis: I will demonstrate how techniques from Bayesian
statistics can be exploited to obtain models for
real-valued prediction or object classification that are
not only typically very accurate, but also highly
'sparse'. That is, if we restrict ourselves to models
whose output is a linearly-weighted sum of a potentially
very large number of candidate nonlinear basis functions,
we can determine which are most 'relevant' for prediction
and discard all the others in an automatic and principled
manner.
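The pruning mechanism behind such sparse models can be
sketched on a toy problem. The sketch below is a heavily
simplified assumption, not the speaker's algorithm: the
basis columns are orthonormal so the posterior covariance
is diagonal, the noise precision beta is assumed known,
and each weight's precision hyperparameter alpha_i is
re-estimated as gamma_i / mu_i^2; weights of irrelevant
basis functions are then driven to zero.

```python
# Toy sparse Bayesian linear regression with an orthonormal
# design, so no matrix inversion is needed (illustrative only).
Phi = [[1.0, 0.0, 0.0],     # three orthonormal basis columns
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
t = [2.0, -3.0, 0.0]        # targets use bases 0 and 1 only
beta = 100.0                # assumed (known) noise precision

alphas = [1.0, 1.0, 1.0]    # one precision hyperparameter per weight
for _ in range(50):
    mu, new_alphas = [], []
    for i, a in enumerate(alphas):
        phi_t = sum(Phi[n][i] * t[n] for n in range(3))   # phi_i . t
        s_ii = 1.0 / (a + beta)     # posterior variance (diagonal case)
        m_i = beta * s_ii * phi_t   # posterior mean of weight i
        gamma = 1.0 - a * s_ii      # how well-determined weight i is
        mu.append(m_i)
        # Re-estimate alpha_i; a zero mean means the basis function
        # is irrelevant, so its precision is sent towards infinity.
        new_alphas.append(gamma / (m_i * m_i) if m_i != 0 else 1e12)
    alphas = [min(a, 1e12) for a in new_alphas]
```

After the iterations, the posterior means `mu` recover the
two relevant weights while the third is pruned to zero.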
All are welcome. There is no need to book.
----------------------
Suzanne Evans
[log in to unmask]