**** AISB-SGAI Artificial Intelligence Evening Lectures ****
The two British AI organisations, the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) and the BCS Specialist Group on Artificial Intelligence (SGAI), are delighted to announce that we are resuming the Artificial Intelligence Evening Lectures at The City University in London.
Speaker: Aaron Sloman, University of Birmingham.
Title: When is seeing (possibly in your mind's eye) better than deducing,
for reasoning?
Time and Place: 8th of March at 5pm (for a 5.15pm start), in room CM505 of the Tait Building, The City University.
Directions: http://www.city.ac.uk/maps/buildings/tait.html
Anyone with an interest in AI is most welcome.
Abstract: Over many years, like many others interested in how human minds work and how human mental functioning might be replicated in machines, I have been trying to understand the role of spatial/visual/diagrammatic reasoning both in mathematics and in everyday life, and how it depends on aspects of human vision with a very old evolutionary history. My first AI paper on this was a critique, presented at IJCAI 1971 in London, of logicist AI as summarised by McCarthy and Hayes in 1969. The 1971 paper, slightly revised as chapter 7 of The Computer Revolution in Philosophy (1978, http://www.cs.bham.ac.uk/research/cogaff/crp/chap7.html), analysed a (non-exhaustive) distinction between 'Fregean' and 'analogical' representations, arguing that both were required by intelligent systems and that both could be used in rigorous reasoning.
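To give a rough flavour of that distinction, here is a toy sketch in Python (illustrative only; the scene, names and encodings are invented for this announcement, not taken from the paper): a Fregean representation applies predicates to arguments, so 'A is above the table' must be deduced by chaining facts, while an analogical representation such as an ordered array shares vertical structure with the scene, so the same relation can simply be read off.

  # Fregean: the scene as predicate/argument structures.
  facts = {("on", "A", "B"), ("on", "B", "table")}

  def above(x, y):
      # Derive 'x is above y' by explicitly chaining 'on' facts.
      if ("on", x, y) in facts:
          return True
      return any(above(z, y) for (rel, a, z) in facts
                 if rel == "on" and a == x)

  # Analogical: index order in the list mirrors height in the scene,
  # so 'above' is inspected rather than deduced.
  stack = ["table", "B", "A"]

  def above_analogical(x, y):
      return stack.index(x) > stack.index(y)

  print(above("A", "table"))             # True, via a chain of inference
  print(above_analogical("A", "table"))  # True, read off shared structure

Both answers are rigorous; they differ in how much of the work is carried by the structure of the representation itself.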
However, it was clear that for real progress in this area we would need to achieve a deep understanding, and deep modelling, of human vision. This remains a distant goal despite much progress on tiny fragments of the problem, e.g. the use of images in recognition, tracking and robot localisation, or the production of object representations suitable for image generation. What all that work lacked was an understanding of the relations between spatial structure, motion and affordances. Seeing affordances involves seeing not merely what exists, but what *can* and *cannot* exist and how that is related to what exists, whereas work on machine vision so far seems to be concerned solely with representing what actually exists in the images or scenes depicted.
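A crude way to see the contrast (again an invented Python toy, not an example from the talk): a conventional scene description records only what is there, whereas affordance-seeing also enumerates which changes the present structure makes possible or impossible.

  # Toy blocks world: what exists, and what can / cannot exist next.
  # The representation and names are invented for this illustration.
  scene = {"A": "B", "B": "table", "C": "table"}   # x sits on scene[x]

  def clear(x):
      # A block is clear if nothing sits on it.
      return all(support != x for support in scene.values())

  def can_move(x, y):
      # Moving x onto y is afforded iff x is clear and the target is
      # free (the table is always available).
      return x != y and clear(x) and (y == "table" or clear(y))

  blocks = list(scene)
  possible   = [(x, y) for x in blocks for y in blocks + ["table"]
                if can_move(x, y)]
  impossible = [(x, y) for x in blocks for y in blocks
                if x != y and not can_move(x, y)]

  print("exists:    ", scene)
  print("possible:  ", possible)    # e.g. ('A', 'C'): both are clear
  print("impossible:", impossible)  # e.g. ('B', 'A'): A is sitting on B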
I am still not able to present a working human-like visual system capable of visual reasoning, but I hope to present some new ideas about *requirements* for such a system, which emerged recently in the course of the EC-funded CoSy robot project. Work on a robot with 3-D manipulative capabilities made me realise that vision involves simulating concurrent processes at different levels of abstraction, in registration with each other and with the optic array. I shall try to explain what that means and show how it relates to visual reasoning, e.g. in mathematics or planning, and, if there is time, how it suggests a host of research questions about the multiple 'orthogonal' competences required for learning to see as well as a typical 5-year-old child does.
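The phrase 'in registration' can be given a concrete, if drastically simplified, gloss. The following Python toy (invented here; it is not the CoSy architecture, and it omits the concurrent-simulation aspect entirely) shows interpretation layers at different levels of abstraction, all indexed by the same optic-array coordinates, so that a hypothesis at one level can be checked against structure at another:

  # Invented illustration of layered interpretations 'in registration'
  # with the optic array: every layer is queried at the same location.
  optic_array = {(0, 0): 0.9, (0, 1): 0.9, (1, 0): 0.1, (1, 1): 0.9}

  layers = {
      "edges":    {(0, 1): "vertical-edge", (1, 0): "horizontal-edge"},
      "surfaces": {(0, 0): "top-face", (1, 1): "front-face"},
      "objects":  {(0, 0): "box-1", (0, 1): "box-1", (1, 1): "box-1"},
  }

  def interpretations_at(pixel):
      # All levels of abstraction share the optic array's coordinates.
      out = {"optic": optic_array.get(pixel)}
      out.update((name, layer.get(pixel)) for name, layer in layers.items())
      return out

  print(interpretations_at((0, 1)))
  # {'optic': 0.9, 'edges': 'vertical-edge', 'surfaces': None,
  #  'objects': 'box-1'}

A full account would replace these static dictionaries with concurrently running processes that predict and simulate structure and motion at each level.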
More information can be found in these presentations:
http://www.cs.bham.ac.uk/research/cogaff/talks/
Dr Andrew Tuson MA(Oxon) MSc(Edin) PhD(Edin) MBCS ([log in to unmask])
Senior Lecturer, Department of Computing, City University, London, UK.