[EMS-NEWS]
Synthetic Performers with Embedded Audio Processing
Wednesday, 26 October, 4pm
Ben Pimlott Lecture Theatre
Prof. Barry Vercoe
Professor of Media Arts & Sciences, MIT; Associate Academic Head & Founding
Member, MIT Media Lab.
http://web.media.mit.edu/~bv/
Abstract: This talk will trace the development of advanced human-computer
music interaction, from the author's first developments at IRCAM in Paris in
the 1980s to the world's first software-only professional audio system released
in Japan in 2002. Digital processing of audio changed in 1990 when it first
became real-time on desktop machines. Human interaction, previously
constrained to custom hardware, was suddenly possible on general-purpose
machines, and the 90s saw new experiments in gestural control over complex
audio effects. The pace of development outpaced Moore's Law once
cross-compilers allowed rapid prototyping of audio structures on DSPs with
large amounts of processor power. An interactive music performance system using
hand-held devices running real-time audio software will be demonstrated. The
talk will also be illustrated by other examples of music research at the MIT
Media Lab, including the Audio Spotlight, applications of cognitive audio
processing, compositions from the Experimental Music Studio, the soundtrack from a
recent Hollywood movie, and a new method of music recommendation on the
Internet.
Barry Vercoe is Professor of Music and Professor of Media Arts and Sciences at
MIT, and Associate Academic Head of the Program in Media Arts & Sciences. He was
born and educated in New Zealand in music and in mathematics, then completed a
doctorate in Music Composition at the University of Michigan. In 1968 at
Princeton University he did pioneering work in the field of Digital Audio
Processing, then taught briefly at Yale before joining the MIT faculty in
1971. In 1973 he established the MIT computer facility for Experimental Music
-- an event now commemorated on a plaque in the Kendall Square subway station.
During the 1970s and early 1980s he pioneered the composition of works
combining computers and live instruments. Then on a Guggenheim Fellowship in
Paris in 1983 he developed a Synthetic Performer -- a computer that could
listen to other performers and play its own part in musical sync, even
learning from rehearsals. In 1992 he won the Computerworld Smithsonian
Award in Media Arts and Entertainment, and recently received the 2004 SEAMUS
Lifetime Achievement Award. Professor Vercoe was a founding member of the MIT
Media Laboratory in 1984, where he has pursued research in Music Cognition and
Machine Understanding. His several Music Synthesis languages are used around
the world, and a variant of his Csound and NetSound languages has recently
been adopted as the core of MPEG-4 audio -- an international standard that
enables efficient transmission of audio over the Internet. At the Media Lab he
currently directs research in Machine Listening and Digital Audio Synthesis
(Music, Mind and Machine group), and is Associate Academic Head of its
graduate program in Media Arts and Sciences.
--
Dr John Levack Drever
Music Department
Goldsmiths College
University of London
New Cross
London
SE14 6NW
T: 020 7919 7652
- - - - - - - - - -
If you do not wish to receive information from 'EMS_NEWS' and want to unsubscribe, or if your email address has recently changed, please email notification to [log in to unmask]
To send an email to this mailing list for distribution to other members,
please email it to "[log in to unmask]"
The EMS NEWS Archive (July 2001-present) is available online at
"http://www.jiscmail.ac.uk/lists/ems-news.html"