** The Music and Science list is managed by the Institute of Musical Research (www.music.sas.ac.uk) as a bulletin board and discussion forum for researchers working at the shared boundaries of science and music. ** MESSAGE FOLLOWS:

Dear all,

Next Wednesday, 2 February, at 3pm, Matthias Mauch will present the seminar 'Lyrics-to-Audio Alignment: Methods of Integrating Textual Chord Labels and an Application'.

The seminar will take place in room 105 in the Electronic Engineering building, Queen Mary University of London, Mile End Road, London E1 4NS. Directions to Queen Mary and details of future seminars can be found at http://www.eecs.qmul.ac.uk/newsevents/researchgroupevents.php?i=12. The room is under access control, so people from outside QM will need to contact C4DM to get in; the lab phone number is +44 (0)20 7882 7480, and if I'm not available, anyone else in the lab should be able to help. If you are coming from outside Queen Mary, please let me know so I can make sure no one is left stuck outside the doors.

All are welcome to attend. For those unable to do so, the seminar will be streamed live, and a video recording will be made available online a few days afterwards. Please see the above website for details.

If you wish to be added to / removed from our mailing list, please send me an email and I'll be happy to do so.


Wednesday's seminar (2 February, 3pm):


Title:
Lyrics-to-Audio Alignment: Methods of Integrating Textual Chord Labels and an Application

Speaker:
Matthias Mauch (National Institute of Advanced Industrial Science and Technology, Japan)


Abstract:
Aligning lyrics to audio has a wide range of applications, such as the automatic generation of karaoke scores, song browsing by lyrics, and the generation of audio thumbnails. Existing methods are restricted to using the lyrics alone, matching them to phoneme features extracted from the audio (usually mel-frequency cepstral coefficients). Our novel idea is to integrate into the inference procedure the textual chord information provided in the paired chords-lyrics format known from song books and Internet sites. We propose two novel methods that implement this idea: firstly, assuming that all chords of a song are known, we extend a hidden Markov model (HMM) framework by including chord changes in the Markov chain and an additional audio feature (chroma) in the emission vector; secondly, for the more realistic case in which some chord information is missing, we present a method that recovers the missing chord information by exploiting repetition in the song. We conducted experiments varying five parameters and show that, with accuracies of 87.5% and 76.0% respectively, both methods perform better than the baseline with statistical significance. We will demonstrate Song Prompter, a software system that acts as a performance assistant by showing horizontally scrolling lyrics and chords in a graphical user interface, together with an audio accompaniment consisting of bass and MIDI drums. The application shows that the automatic alignment is accurate enough to be used in a musical performance.
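For readers curious about the general flavour of this kind of alignment, the sketch below is a minimal, self-contained Python illustration of the core idea of combining phoneme evidence and chord evidence in a single left-to-right Viterbi alignment. It is not the speaker's implementation: the function viterbi_align, the stay-probability parameter, and the synthetic likelihoods are placeholders assumed purely for illustration.

    # Minimal, illustrative sketch (not the authors' implementation) of the core idea:
    # a left-to-right Viterbi alignment whose states are (phoneme, chord) pairs, so each
    # frame's emission score combines a phoneme likelihood (e.g. from MFCCs) with a
    # chord likelihood (e.g. from chroma). All probabilities below are synthetic stand-ins.
    import numpy as np

    def viterbi_align(phoneme_logp, chord_logp, units, stay_logp=np.log(0.9)):
        """Align a left-to-right sequence of (phoneme, chord) units to audio frames.

        phoneme_logp : (n_frames, n_phonemes) per-frame phoneme log-likelihoods
        chord_logp   : (n_frames, n_chords)   per-frame chord log-likelihoods
        units        : list of (phoneme_id, chord_id) pairs in song order
        Returns the frame index at which each unit starts.
        """
        move_logp = np.log1p(-np.exp(stay_logp))          # log(1 - p_stay)
        n_frames, n_units = phoneme_logp.shape[0], len(units)
        # Per-frame emission score for each unit: phoneme evidence + chord evidence.
        emit = np.array([phoneme_logp[:, p] + chord_logp[:, c] for p, c in units]).T
        delta = np.full((n_frames, n_units), -np.inf)     # best log-score: unit j at frame t
        back = np.zeros((n_frames, n_units), dtype=int)   # 1 = entered unit j at frame t
        delta[0, 0] = emit[0, 0]
        for t in range(1, n_frames):
            for j in range(n_units):
                stay = delta[t - 1, j] + stay_logp
                move = delta[t - 1, j - 1] + move_logp if j > 0 else -np.inf
                back[t, j] = int(move > stay)
                delta[t, j] = max(stay, move) + emit[t, j]
        # Trace back from the final unit to recover each unit's start frame.
        starts, j = [0] * n_units, n_units - 1
        for t in range(n_frames - 1, 0, -1):
            if back[t, j]:
                starts[j] = t
                j -= 1
        return starts

    # Toy usage with random "likelihoods", just to show the interface.
    rng = np.random.default_rng(0)
    phoneme_logp = np.log(rng.dirichlet(np.ones(4), size=50))  # 50 frames, 4 phoneme classes
    chord_logp = np.log(rng.dirichlet(np.ones(3), size=50))    # 50 frames, 3 chord classes
    units = [(0, 0), (1, 0), (2, 1), (3, 2)]                   # (phoneme, chord) per lyric unit
    print(viterbi_align(phoneme_logp, chord_logp, units))

In the methods presented at the seminar the state space, transition structure, and handling of missing chords are considerably richer; the sketch only shows why adding a chroma-based chord term to the emission score can sharpen the alignment relative to phoneme features alone.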


Bio:
Matthias Mauch received the Diplom degree in mathematics from the University of Rostock, Germany, in collaboration with the Max Planck Institute for Demographic Research. He received his Ph.D. degree from the Centre for Digital Music at Queen Mary University of London, U.K. He is currently a postdoctoral research scientist in the Media Interaction Group of the National Institute of Advanced Industrial Science and Technology (AIST), Japan. His research focuses on the automatic extraction of high-level musical features from audio, with an emphasis on harmonic progressions and repetitions. He is a songwriter in the band Zweieck. Website: http://matthiasmauch.net


Emmanouil Benetos
--
Centre for Digital Music (C4DM)
School of Electronic Engineering and Computer Science
Queen Mary, University of London
[log in to unmask]
Tel: +44 (0)20 7882 7480
Fax: +44 (0)20 7882 7997

C4DM Web-site : http://www.elec.qmul.ac.uk/digitalmusic/index.html