** The Music and Science list is managed by the Institute of Musical Research (www.music.sas.ac.uk) as a bulletin board and discussion forum for researchers working at the shared boundaries of science and music. **
MESSAGE FOLLOWS:
Dear all,
On Thursday, 26th April at 2:00pm, Ramani Duraiswami will present the
seminar 'Capturing, Visualizing and Recreating Spatial Sound'.
Please note that the talk will take place in the Maths lecture theatre in
the Mathematical Sciences building, Queen Mary University of London, Mile
End Road, London E1 4NS.
Directions on how to access the building can be found at
http://www.eecs.qmul.ac.uk/about/campus-map.php. If you are coming from
outside Queen Mary, please let me know, so I can make sure no-one is stuck
outside the doors. Details of future seminars can be found at
http://www.eecs.qmul.ac.uk/newsevents/researchgroupevents.php?i=12.
All are welcome to attend. For those unable to do so, a video recording of
the seminar will be made available online a few days after the event.
If you wish to be added to / removed from our mailing list, please send me
an email and I'll be happy to do so.
Thursday's seminar (26th April, 2:00pm):
Title:
Capturing, Visualizing and Recreating Spatial Sound
Speaker:
Ramani Duraiswami
Abstract:
The sound field at a point contains information on the spatial origin of
the sound, and humans use this information in making sense of the
environment. When we hear sound, that sound is filtered by interaction
with the environment and our bodies. This process endows the sound with
cues that are then decoded by the neural system to perceive the world
auditorily in three dimensions. To capture and reproduce this directional
information in the sound we need a spatial representation of the sound,
and a means to capture and manipulate the sound in this representation. We
have explored two representations of directional sound rooted in classical
mathematical physics - in terms of spherical wave functions and in terms of
plane wave expansions. We have developed spherical microphone arrays that
allow the captured sound to be represented directly in these bases.
Plane-wave beamforming allows the sound-field at a point to be visualized
as an image, much as a video camera images the light-field at a given
point. The registration of the audio images with visual images allows a
new way to perform audio-visual scene analysis. Several examples are
presented at http://goo.gl/igflH.
The captured sound can be used to recreate spatial sound scenes over
headphones, allowing perception of the original scene. For the
reproduction, our approach incorporates individualized HRTFs (measured via
a novel reciprocal technique), room modeling, and tracking.
(Joint work with Adam O'Donovan, Dmitry Zotkin and Nail A. Gumerov)
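To give a flavour of the plane-wave beamforming mentioned in the abstract, here is a minimal, hypothetical sketch (not the speaker's actual implementation): a delay-and-sum beamformer that steers a small microphone array toward candidate directions and maps the output power. Scanning many directions and arranging the powers on a grid is the basic idea behind the "acoustic image". The array geometry, signal, and scan grid below are illustrative assumptions only.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def steering_delays(mic_positions, direction, c=C):
    """Plane-wave propagation delay (s) at each mic for a unit look direction."""
    return mic_positions @ direction / c

def beamform_power(signals, mic_positions, directions, fs, c=C):
    """Delay-and-sum output power for each candidate look direction."""
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    powers = []
    for d in directions:
        tau = steering_delays(mic_positions, d, c)
        # Undo each channel's plane-wave delay, then sum coherently:
        # a wave truly arriving from direction d adds in phase and dominates.
        aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
        powers.append(np.sum(np.abs(aligned.sum(axis=0)) ** 2))
    return np.array(powers)

# Toy usage: simulate a 1 kHz plane wave arriving from +x on a 4-mic array,
# then scan six axis-aligned directions; the power should peak at +x.
fs, f0, n = 16000, 1000.0, 1024
mics = np.array([[0, 0, 0], [0.05, 0, 0], [0, 0.05, 0], [0, 0, 0.05]])
true_dir = np.array([1.0, 0.0, 0.0])
t = np.arange(n) / fs
delays = steering_delays(mics, true_dir)
signals = np.sin(2 * np.pi * f0 * (t[None, :] - delays[:, None]))
look_dirs = [np.array(v, float) for v in
             [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
powers = beamform_power(signals, mics, look_dirs, fs)
```

In a real acoustic camera the scan grid covers the whole sphere and the per-direction powers are rendered as pixel intensities, which is what allows registration with an ordinary video image.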
Bio:
Ramani Duraiswami is a member of the faculty of the department of computer
science at the University of Maryland, College Park. He has broad research
interests in a number of areas including scientific computing, spatial
audio, machine learning and computer vision. He has a Ph.D. from Johns
Hopkins and a B.Tech. from IIT Bombay. See
http://www.umiacs.umd.edu/~ramani for more on his research, and
http://www.visisonics.com for more on VisiSonics.
Future C4DM seminars (Seminar details tbc):
Wenwu Wang - University of Surrey
Wed 25th April 2012
Martyn Davies - Six Two Productions
Wed 9th May 2012
Miguel Rodrigues - University College London
Wed 16th May 2012
--
Peter Foster
Postgraduate Research Student
Room 104, Electronic Engineering Bldg
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS, UK
email: [log in to unmask]