****************************************
foNETiks
A network newsletter for
the International Phonetic Association
and for the Phonetic Sciences
January 2006
***(The editors wish our readers all the best for 2006!)***
*****************************************
Editors:
Linda Shockey, University of Reading, UK
Gerry Docherty, University of Newcastle, UK
Paul Foulkes, University of York, UK
Lisa Lim, Universiteit van Amsterdam, The Netherlands
E-mail address: <[log in to unmask]>
The foNETiks archive can be found on the WWW at: <http://www.jiscmail.ac.uk/lists/fonetiks.html>
Visit the IPA web page at: <http://www.arts.gla.ac.uk/IPA/ipa.html>
*********************************
ANNOUNCEMENTS
[new ones marked ++]
[date of first appearance follows]
*********************************
17 - 19 January 2006. 3rd Old World Conference in Phonology (OCP3). Budapest, Hungary. <http://nytud.hu/ocp3> (08/05)
21 January 2006. Recherches Actuelles en Phonétique et Phonologie. Paris, France. <http://www.atala.org/article.php3?id_article=278> (11/05)
++ 31 March - 1 April 2006. The 13th Conference on Oral English, organized by ALOES, the LSHS Department and the Laboratoire de Linguistique informatique (University of Paris 13). Villetaneuse, France. [log in to unmask] (01/06) (further details below).
9 - 11 April 2006. Multilingual Speech and Language Processing (Multiling 2006). ISCA Tutorial and Research Workshop (ITRW). Stellenbosch University, South Africa. <http://www.unistel.co.za/multiling2006> (07/05, 08/05)
10 - 12 April 2006. Meeting of the British Association of Academic Phoneticians (BAAP 2006). Edinburgh. <http://www.qmuc.ac.uk/ssrc> (09/05)
20 - 22 April 2006. 1st Slovene International Phonetic Conference (SloFon 1). Ljubljana, Slovenia. <http://slofon.zrc-sazu.si/> (05/05)
2 - 5 May 2006. Speech Prosody 2006. International Congress Center, Dresden, Germany. <http://www.ias.et.tu-dresden.de/sp2006/> (08/05)
10 - 14 May 2006. International Conference on Conversation Analysis (ICCA). University of Helsinki, Finland. <http://www.helsinki.fi/hum/skl/icca/> (12/03)
14-19 May 2006. 31st International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Centre des Congrès Pierre Baudis, Toulouse, France. <http://www.icassp2006.org/default.asp> (12/05)
20 May 2006. Speech Recognition and Intrinsic Variation Workshop. (A satellite workshop of ICASSP2006; see above). Toulouse, France. <http://www.icassp2006.org/default.asp>
21 - 25 May 2006. Speech Pathology Australia National Conference: Frontiers. Fremantle, Western Australia. <http://www.speechpathologyaustralia.org.au/Content.aspx?p=62> (08/05)
++ 23 May 2006. International Workshop on EMOTION: CORPORA FOR RESEARCH ON EMOTION AND AFFECT (in association with the 5th International Conference on Language Resources and Evaluation, LREC 2006). Genoa, Italy. [log in to unmask] (01/06) (further details below)
++ 25 - 27 May 2006. 14th Manchester Phonology Meeting. Manchester, UK.
<www.englang.ed.ac.uk/mfm/14mfm.html> (01/06) (further details below)
++ 27 May 2006. Workshop on "Quality assurance and quality measurement for language and speech resources" (in conjunction with LREC 2006, the 5th International Conference on Language Resources and Evaluation). Genoa, Italy. <http://utrecht.elsnet.org/lrec2006qa> (01/06) (further details below)
31 May - 3 June 2006. 11th Symposium of the International Clinical Phonetics and Linguistics Association (ICPLA11). Dubrovnik, Croatia. <http://icpla.ffzg.hr/> (07/05)
7 - 10 June 2006. 5th International Conference on Speech Motor Control. Nijmegen, Netherlands. <http://www.slp-nijmegen.nl/smc2006/> (11/05)
9 - 13 June 2006. 2nd Biennial Russian Conference on Cognitive Science. St. Petersburg, Russia. <http://www.cogsci.ru/cogsci06/firstcallen.htm>; <[log in to unmask]> (06/05)
19-29 June 2006. Phonology Fest 2006. Bloomington, Indiana, USA. <http://www.indiana.edu/~phono/> (12/05)
26 - 29 June 2006. SPECOM, St. Petersburg, Russia. <http://www.specom.nw.ru/>; <[log in to unmask]> (10/05)
29 June - 1 July 2006. 10th Conference on Laboratory Phonology (LabPhon10). Paris, France. <http://www.lpl.univ-aix.fr/~labphon/> (07/05)
6 - 8 July 2006. Approches Phonologiques et Prosodiques de la Variation Sociolinguistique: le Cas du Francais (PFC 2006). <http://pfc2006.fltr.ucl.ac.be/> (09/05)
22-24 August 2006. SSW-6: 6th ISCA Speech Synthesis Research Workshop. Bonn, Germany.
28-30 August 2006. ISCA Tutorial and Research Workshop on Experimental Linguistics. Athens, Greece. <http://www.phil.uoa.gr/~abotinis> (12/05)
7 - 9 September 2006. 2nd TIE Conference on the Typology of Tone and Intonation (TTI). Berlin, Germany. <http://linguistlist.org/issues/16/16-1263.html>; <http://www.zas.gwz-berlin.de/index.html?events_tie2> (05/05)
8 - 10 September 2006. Conference on Laboratory Approaches to Spanish Phonology. Toronto, Canada. <http://www.chass.utoronto.ca/spanish_portuguese/phonology/> (11/05)
16 September 2006. ISCA Tutorial and Research Workshop on Statistical and Perceptual Audition (SAPA 2006). (A satellite workshop of Interspeech 2006 - ICSLP). Pittsburgh, PA, USA. <http://www.sapa2006.org/> (12/05)
17 - 22 September 2006. Interspeech'2006 - ICSLP (International Conference on Spoken Language Processing). Pittsburgh, Pennsylvania, USA. <http://www.interspeech2006.org/> (12/04)
13-16 December 2006. ISCSLP 2006: 5th International symposium on Chinese Spoken Language Processing. Singapore. <http://www.iscslp2006.org/> (12/05)
*************************************
CONFERENCES & WORKSHOPS
*************************************
Fourteenth Manchester Phonology Meeting
25-27 MAY 2006
Deadline for abstracts: 20th February 2006
Special session: 'Fieldwork and phonological theory' (featuring Dan Everett, Larry Hyman, Keren Rice)
Held in Manchester, UK; organised through a collaboration of phonologists at the University of Edinburgh, the University of Manchester, the Université Toulouse-Le Mirail, the Université Montpellier-Paul Valéry and elsewhere.
Conference website: www.englang.ed.ac.uk/mfm/14mfm.html
BACKGROUND
We are pleased to announce our Fourteenth Manchester Phonology Meeting (14mfm). The mfm is the UK's annual phonology conference, with an international set of organisers; it is held in late May every year in Manchester. The meeting has become a key conference for phonologists from all corners of the world, where anyone who declares themselves to be interested in phonology can submit an abstract on anything phonological in any phonological framework. In an informal atmosphere, we discuss a wide range of topics, including the phonological description of a wide variety of languages, issues in phonological theory, aspects of phonological acquisition and implications of phonological change.
SPECIAL SESSION
There is no conference theme - abstracts can be submitted on anything - but, following the success of such sessions in previous years, a special themed session has been organised, entitled 'Fieldwork and phonological theory'.
This will feature invited speakers and conclude with an open discussion session in which contributions from the audience will be very welcome. Abstracts which deal overtly with the issues involved (from any perspective) are certainly welcome.
SPECIAL SESSION SPEAKERS (in alphabetical order)
* Dan Everett (University of Manchester, UK)
* Larry Hyman (University of California, Berkeley, USA)
* Keren Rice (University of Toronto, Canada)
ABSTRACT SUBMISSION
**This is a summary - please consult the website for full details**
*www.englang.ed.ac.uk/mfm/14mfm.html*
* There is no obligatory conference theme - abstracts can be submitted on anything. Abstracts should be sent to Patrick Honeybone as attachments to an email ([log in to unmask]) by 20th February 2006.
* Abstracts should be no longer than one side of A4, with 2.5cm or one inch margins, single-spaced, with a font size no smaller than 12, and with normal character spacing.
* Please send two copies of your abstract - one of these should be anonymous and one should include your name, affiliation and email address at the top of the page, directly below the title. All abstracts will be reviewed anonymously by members of the organising committee and advisory board.
* Please use one of these formats for your abstract: pdf, Word, or plain text. If you need to use a phonetic font in your abstract, either embed it in a pdf file, or use the SIL Doulos IPA93 font, which can be downloaded for free from this site: <http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=encore-ipa>.
* Full papers will last around 25 minutes with around 10 minutes for questions, and there will be a high-profile poster session lasting one and a half hours. Please indicate whether you would prefer to present your work as an oral paper or a poster, or whether you would be prepared to present it in either form.
* If you need technical equipment for your talk, please say so in the message accompanying your abstract and we will do our best to provide it, although this cannot be guaranteed.
* We aim to finalise the programme, and to contact abstract-senders by around 20th March. At present, there are no plans for publishing the general proceedings of the Meeting. We would like to keep the mfm as an informal forum where speakers can air new ideas which are still in the early stages of development.
**Further important details** concerning abstract submission are available on the conference website - please make sure that you consult these before submitting an abstract: www.englang.ed.ac.uk/mfm/14mfm.html
ORGANISERS
Organising Committee:
The first named is the convenor and main organiser - if you would like to attend or if you have any queries about the conference, please feel free to get in touch with me ([log in to unmask], or phone +44 (0)131 651 1838).
* Patrick Honeybone (Edinburgh)
* Ricardo Bermudez-Otero (Manchester)
* Wiebke Brockhaus-Grand (Manchester)
* Philip Carr (Montpellier-Paul Valery)
* Jacques Durand (Toulouse-Le Mirail)
Advisory Board:
* Jill Beckman (Iowa)
* Mike Davenport (Durham)
* Daniel L. Everett (Manchester)
* Paul Foulkes (York)
* S.J. Hannahs (Newcastle upon Tyne)
* John Harris (UCL)
* Kristine A. Hildebrandt (Manchester)
* Martin Krämer (Tromsø)
* Ken Lodge (UEA)
* April McMahon (Edinburgh)
* Marc van Oostendorp (Meertens Instituut)
* Glyne Piggott (McGill)
* Curt Rice (Tromsø)
* Catherine O. Ringen (Iowa)
* Tobias Scheer (Nice)
* James M Scobbie (QMUC)
* Dan Silverman
* Marilyn M. Vihman (Bangor)
* Moira Yip (UCL)
--------------------------------------------------------
FIRST CALL FOR PAPERS
Workshop
"Quality assurance and quality measurement for
language and speech resources"
on Saturday, May 27th 2006, in conjunction with
LREC 2006, The 5th INTERNATIONAL CONFERENCE ON
LANGUAGE RESOURCES AND EVALUATION
Genoa, Italy, 24-26 May 2006
Workshop description:
=====================
The workshop aims at
- bringing together experience with and insights in quality
assurance and measurement for language and speech resources in
a broad sense (including multimodal resources, annotations,
tools, etc),
- covering both qualitative and quantitative aspects,
- identifying the main tools and strategies,
- analysing the strengths and weaknesses of current practice,
- establishing what can be seen as current best practice,
- reflecting on trends and future needs.
It can be seen as a follow-up to the workshop on speech resources that took place at LREC 2004, but the scope is wider, as we include both language and speech resources. We feel that there is a lot to be gained by bringing these communities together, if only because the speech community seems to have a longer tradition of resource evaluation than the written language community.
Relevance:
=========
Quality assurance is an important concern for the provider, the distributor and the user of language and speech resources alike.
The concept of quality is only meaningful if both the producer and the user of the resources can rely on the same set of quality criteria, and if there are effective procedures to check whether these criteria are met. The universe of possible types of language resources is huge and evolves over time, and there is no universal set of qualitative or quantitative criteria and tests that can be applied to all sorts of resources.
In this workshop we will try to investigate what sorts of criteria, tests and measures are being used by providers, users and distribution agencies such as ELRA and LDC, and we will try to distill from this current practice general recommendations for quality assurance and measurement for language and speech resources.
The workshop will look at quality assurance and quality measures from the points of view of the provider, the distributor and the user, and will explicitly address special problems in connection with very large corpora, including numerical measures, comparison of corpora, exchange formats, etc.
Format:
======
The workshop will be a full-day event, and will include
(1) invited presentations (25+5 minutes) from data providers,
distributors, or validators who are working on the basis of
an explicit QA framework
(2) submitted papers (15+5 minutes) by others who can report on
relevant QA experience in the production, validation or use
of resources
(3) a round-table discussion aiming at establishing best practice
Papers:
======
We invite papers that
- describe or critically analyze existing quality measures
used to compare or validate resources
- describe or critically analyze existing quality assurance
practices in resources production
- describe or critically analyze existing approaches to
quality validation or measurement of third party resources
- describe future directions aimed at improving quality assurance,
validation and measurement procedures for language and speech resources
Timetable:
=========
- Paper submission deadline: Feb 17, 2006
- Notification of acceptance: March 10, 2006
- Final version of paper: April 10, 2006
- Workshop: May 27, 2006 (full day)
Submissions:
===========
Abstracts should be in English, and up to 4 pages long.
Submission format is PDF.
Papers will be reviewed by at least 3 members of the scientific committee. The reviews are NOT anonymous.
Accepted papers are up to 6 pages long, and should be submitted in the format specified for the proceedings by the LREC organisers. The URL will be published on the Workshop Site (see below).
Submissions should be sent to [log in to unmask]
Workshop and core scientific committee:
======================================
Co-chairs:
- Steven Krauwer (UU/ELSNET, [log in to unmask])
- Uwe Quasthoff (Leipzig, [log in to unmask])
Members:
- Simo Goddijn (INL, [log in to unmask])
- Jan Odijk (ELRA/Scansoft/UU, [log in to unmask])
- Khalid Choukri (ELDA, [log in to unmask])
- Nicoletta Calzolari (ILC-CNR/WRITE, [log in to unmask])
- Bente Maegaard (CST, [log in to unmask])
- Chris Cieri (LDC, [log in to unmask])
- Chu-ren Huang (Ac Sin, [log in to unmask])
- Takenobu Tokunaga (TIT, [log in to unmask])
- Harald Hoege (Siemens, [log in to unmask])
- Henk van den Heuvel (CLST/SPEX, [log in to unmask])
- Dafydd Gibbon (Bielefeld, [log in to unmask])
- Key-Sun.Choi (KORTERM, [log in to unmask])
- Jorg Asmussen, (DSL, [log in to unmask])
Scientific committee:
====================
We will include other experts as needed for the review process or for the completion of the programme.
Main contact and further info:
=============================
- Contact: Steven Krauwer, [log in to unmask]
- Workshop URL: http://utrecht.elsnet.org/lrec2006qa
- Conference URL: http://www.lrec-conf.org/lrec2006
This workshop is supported by ELSNET and WRITE (the international coordination committee for written language resources and evaluation).
------------------------------------------------
5th International Conference on Speech Motor Control
Final Call for Papers and a message for EMA users.....
##################################################
The 5th edition of the International Conference on Speech Motor Control will be held June 7-10, 2006 in Nijmegen, the Netherlands. Upon request, the deadline for submission of abstracts has been shifted to January 9, 2006.
For an update of the program and abstract submission see our website:
http://www.slp-nijmegen.nl/smc2006/
Ben Maassen
Pascal van Lieshout
PLUS... A MESSAGE FROM PHIL HOOLE.....
Dear EMA users
There are plans afoot to use the upcoming Speech Motor Control conference in Nijmegen as an opportunity for EMA users to get together.
The precise format has not yet been finalized but most likely there will be a couple of oral papers on EMA developments, plus some kind of workshop/demo session with opportunity for discussion.
The conference organizers recently extended the deadline for abstract submission to Jan. 9th. Since I imagine many of you will find it easier to raise travel funds for attendance if you are presenting a regular paper at the conference (i.e. on any topic relevant to the conference, not necessarily EMA-related), there is still (just) time to submit.
All the best for 2006
Phil
------------------------------------
Second Call for Papers
13th Conference on Oral English
The 13th Conference on Oral English, organized by ALOES, the LSHS Department and the Laboratoire de Linguistique informatique (University of Paris 13) will take place at Villetaneuse on:
Friday 31 March and Saturday 1 April 2006.
The guest of honour will be Professor Carlos Gussenhoven (Queen Mary College, University of London).
Theme of the conference: "Stress and rhythm revisited".
Conference committee
Viviane Arigne (Université Paris 13), Nicolas Ballier (Université Paris 13), Alain Deschamps (ALOES), Ruth Huart (ALOES), Christiane Migette (Université Paris 13), Michael O'Neil (ALOES) and Jennifer Vince (ALOES).
Papers
The first day (at least) will be devoted to papers on the conference theme "Stress and rhythm revisited". However papers not directly relevant to the theme will also be considered. Each presentation should last 30 minutes and will be followed by 10 minutes discussion. Proposals (200-300 words) should be sent to the Committee before 1 February 2006 at the following address:
Ruth Huart,
Secrétariat de l'ALOES
Université Paris 7
4 rue de la Gare
62128 Ecoust-St-Mein
[log in to unmask]
N.B. Please specify if any special equipment is required.
Further information
A third circular, together with the Conference programme, enrolment form and practical details, will be sent out in February 2006. Please note the following points:
Enrolment fee: 24 euros (ALOES members: 16 euros; students: 8 euros)
On the Friday evening, a dinner will be organized in a Paris restaurant.
Questions concerning the organisation of the conference should be addressed
to:
[log in to unmask]
On behalf of the organizing committee:
V. Arigne, N. Ballier, C. Migette
*****************************************************************
First Call For Papers
International Workshop on EMOTION:
CORPORA FOR RESEARCH ON EMOTION AND AFFECT
23 May 2006
half day workshop: afternoon session
In Association with
5th INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
LREC2006 http://www.lrec-conf.org/lrec2006/
Main Conference
24-25-26 May 2006
Magazzini del Cotone Conference Center
Genoa - Italy
------------------------------------------------
Summary of the call for participation
------------------------------------------------
Papers are invited in the area of corpora for research on emotion and affect. They may cover one or more of the following topics. What kind of theory of emotion is needed to guide the area? What are appropriate sources? Which modalities should be considered, in which combinations? What are the realistic constraints on recording quality? How can the emotional content of episodes be described within a corpus? Which emotion-related features should a corpus describe, and how? How should access to corpora be provided? What level of standardisation is appropriate? How can quality be assessed? What ethical issues arise in database development and access? The organisers of this workshop are members of the HUMAINE NoE, and have a main role in its databases WP.
They will be able to reach researchers in databases of emotions both through the resulting contacts and through the HUMAINE portal.
--------------------
MOTIVATIONS
--------------------
This decade has seen an upsurge of interest in systems that register emotion (in a broad sense) and react appropriately to it. Emotion corpora are fundamental both to developing sound conceptual analyses and to training these 'emotion-oriented systems' at all levels - to recognise user emotion, to express appropriate emotions, to anticipate how a user in one state might respond to a possible kind of reaction from the machine, etc. Corpora have only begun to grow with the area, and much work is needed before they provide a sound foundation.
The HUMAINE network of excellence (http://emotion-research.net/) has brought together several groups working on the development of databases, and the workshop aims to broaden the interaction that has developed in that context.
Papers are expected to address some of the following areas of concern.
What kind of theory of emotion is needed to guide the area?
Many models of emotion are common enough to affect the way teams go about collecting and describing emotion-related data. Some which are familiar and intuitively appealing are known to be problematic, either because they are theoretically dated or because they do not transfer to practical contexts. To evaluate the resources that are already available, and to construct valid new corpora, research teams need some sense of the models that are relevant to the area.
What are appropriate sources?
In the area of emotion, some of the hardest problems involve acquiring basic data. Four main types of source are commonly used. Their potential contributions and limitations need to be understood.
Acted:
Many widely used emotion databases consist of acted representations of emotion (which may or may not be generated by actors). The method is extremely convenient, but it is known that systems trained on acted material may not transfer to natural emotion. It has to be established what kind of acted material is useful for what purposes.
Application-driven:
A growing range of databases are derived from specific applications (e.g. call centres). These are ideal for some purposes, but access is often restricted for commercial reasons, and it is highly desirable to have more generic material that could underpin work on a wide range of applications.
General naturalistic:
Data that is representative of everyday life is an attractive ideal, but very difficult to collect. Making special-purpose recordings of everyday life is a massive task, with the risk that recording changes behaviour. Several teams have used material from broadcasts, radio & TV (talk shows, current affairs). That raises issues of access, signal quality, and genuineness.
Induction:
A natural ideal is to induce emotion of appropriate kinds under appropriate circumstances. Satisfying induction is an elusive ideal, but new techniques are gradually emerging.
Which modalities should be considered, in which combinations?
Emotion is reflected in multiple channels - linguistic content, paralinguistic expression, facial expression, eye movement, gesture, gross body movement, manner of action, visceral changes (heart rate, etc), brain states (EEG activity, etc). The obvious ideal is to cover all simultaneously, but that is impractical - and it is not clear how often all the channels are actually active. The community needs to clarify the relative usefulness of the channels, and of strategies for sampling combinations.
What are the realistic constraints on recording quality?
Naturalism tends to be at odds with ease of signal processing.
Understanding of the relevant tradeoffs needs to be reached. That includes awareness of different applications (high quality may not be crucial for defining the expressive behaviours a virtual agent should show) and of the timescale for solving particular signal processing issues (e.g. recovering features from images of heads in arbitrary poses).
How can the emotional content of episodes be described within a corpus?
Several broad approaches exist to transcribing the emotional content of an excerpt - using everyday emotion words; using dimensional descriptions rooted in psychological theory (intensity, evaluation, activation, power); using concepts from appraisal theory (perceived goal-conduciveness of a development, potential for coping, etc). These are being developed in specific ways driven by goals such as elegance, inter-rater reliability, faithfulness to the subtlety of everyday emotion, relevance to agent decisions, etc. There seems to be a real prospect of achieving an agreed synthesis of the main schemes.
Which emotion-related features should a corpus describe, and how?
Corresponding to each emotion-related channel is one or more sets of signs relevant to conveying emotion. For instance, paralinguistic signs exist at the level of basic contours - F0, intensity, formant-related properties, and so on; at the level of linguistic features of prosody (such as 'tones and breaks' in TOBI); and at more global levels (tune shapes, repetitions, etc). Even for speech, inventories of relevant signs need to be developed, and for channels such as idle body movements, few descriptive systems have been proposed.
Few teams have the expertise to annotate many types of sign competently, and so it is important to establish ways of allowing teams that do have the expertise to make their annotations available as part of a database.
Mainly for lower level features, automatic transcription methods exist, and their role needs to be clarified. In particular, tests of their reliability are needed, and that depends on data that can serve as a reference.
How should access to corpora be provided?
Practically, it is clearly important to find ways of establishing a sustainable and easily expandable multi-modal database for any sorts of emotion-related data; to develop tools for easily importing and exporting data; to develop analysis tools and application programmers interfaces to work on the stored data and meta-data; and to provide ready access to existing data from previous projects. Approaches to those goals need to be defined.
What level of standardisation is appropriate?
Standardisation is clearly desirable in the long term, but with so many basic issues unresolved, it is not clear where real consensus can be achieved and where it is better to encourage competition among different options.
How can quality be assessed?
It is clear that some existing corpora should not be used for serious research. The problem is to develop quality assurance procedures that can direct potential users toward those which can.
Ethical issues in database development and access
Corpora that show people behaving emotionally are very likely to raise ethical issues - not simply about signed release forms, but about the impact of appearing in a public forum talking (for instance) about topics that distress or excite them. Adequate guidelines need to be developed.
All of the questions above will be studied during the workshop and will contribute to the study of practical, methodological and technical issues central to developing emotional corpora (such as the methodologies to be used for emotional database creation, the coding schemes to be defined, the technical settings to be used for the collection, and the selection of appropriate coders).
-------------------------
IMPORTANT DATES
-------------------------
1st call for papers 16 December
2nd call for papers 3 January
Deadline for 1000-word abstract submission 17 February
Notification of acceptance 10 March
Final version of accepted paper 10 April
Workshop half-day 23 May
--------------
SUBMISSIONS
---------------
The workshop will consist of paper and poster presentations.
Final submissions should be 4 pages long, must be in English, and follow the submission guidelines at http://www.chi2006.org/docs/chi2006pubsformat.doc
The preferred format is MS Word.
The .doc file should be submitted via email to [log in to unmask]
---------------------
As soon as possible, authors are encouraged to send to [log in to unmask] a brief email indicating their intention to participate, including their contact information and the topic they intend to address in their submissions.
Proceedings of the workshop will be printed by the LREC Local Organising Committee.
Submitted papers will be blind reviewed.
--------------------------------------------------
TIME SCHEDULE AND REGISTRATION FEE
--------------------------------------------------
The workshop will consist of an afternoon session. There will be time for collective discussions.
For this half-day Workshop, the registration fee will be specified on http://www.lrec-conf.org/lrec2006/
---------------------------
THE ORGANISING COMMITTEE
----------------------------
Laurence Devillers / Jean-Claude Martin
Spoken Language Processing group, LIMSI-CNRS, BP 133, 91403 Orsay Cedex, France
(+33) 1 69 85 80 62 / (+33) 1 69 85 81 04 (phone)
(+33) 1 69 85 80 88 / (+33) 1 69 85 80 88 (fax)
[log in to unmask] / [log in to unmask]
http://www.limsi.fr/Individu/devil/
http://www.limsi.fr/Individu/martin/
Roddy Cowie / School of Psychology
Ellen Douglas-Cowie / Dean of Arts, Humanities and Social Sciences Queen's University, Belfast BT7 1NN, UK
+44 2890 974354 / +44 2890 975348 (phone)
+44 2890 664144 / +44 2890 ****** (fax)
http://www.psych.qub.ac.uk/staff/teaching/cowie/index.aspx
http://www.qub.ac.uk/en/staff/douglas-cowie/
[log in to unmask] / [log in to unmask]
Anton Batliner - Lehrstuhl fuer Mustererkennung (Informatik 5) Universitaet Erlangen-Nuernberg - Martensstrasse 3
91058 Erlangen - F.R. of Germany
Tel.: +49 9131 85 27823 - Fax: +49 9131 303811
[log in to unmask]
http://www5.informatik.uni-erlangen.de/Personen/batliner/
----------------------------------
PROGRAM COMMITTEE
----------------------------------
Roddy Cowie, QUB, UK
Ellen Douglas-Cowie, QUB, UK
Laurence Devillers, LIMSI-CNRS, FR
Jean-Claude Martin, LIMSI-CNRS, FR
Anton Batliner, Univ. Erlangen, D
Nick Campbell, ATR, J
Elisabeth André, Univ. Augsburg, D
Stephanos Kollias, ICCS, G
Marc Schröder, DFKI Saarbrücken, D
Catherine Pelachaud, Univ. Paris VIII, FR
Elisabeth Shriberg, SRI and ICSI, USA
Izhak Shafran, Univ. Johns Hopkins, CSLP, USA
Ioana Vasilescu, ENST, FR
Fiorella de Rosis, Univ. Bari, I
Isabella Poggi, Univ. Roma Tre, I
Nadia Bianchi-Berthouze, Univ. Aizu, J
Susanne Kaiser, UNIGE, S
Valérie Maffiolo, FranceTelecom, FR
Véronique Aubergé, CNRS-STIC, FR
Shrikanth Narayanan, USC Viterbi School of Engineering, USA
John Hansen, Univ. of Texas at Dallas, USA
Christine Lisetti, EURECOM, FR
*****************************
POSITIONS AVAILABLE
*****************************
Stanford University
The Department of Linguistics at Stanford University invites applications for a visiting position in phonetics at the rank of Lecturer for the
2006-2007 Academic Year. The Lecturer will be expected to teach four quarter-length courses at both the graduate and undergraduate levels. Two of the courses will be in phonetics; the other two courses should either contribute to the phonetics/phonology curriculum more broadly, or complement the department's current offerings. The Lecturer will also be expected to help in the management of the Department's Phonetics Laboratory.
Applicants must hold a Ph.D. in Linguistics or a related field by September 1, 2006. Stanford University is an equal opportunity, affirmative action employer.
To receive full consideration, HARD-COPY applications should arrive by January 30, 2006. Applications should include the candidate's CV, statements of teaching and research interests, teaching evaluations (if available), up to three representative samples of published or unpublished work, and the names and e-mail addresses of three or four referees.
Applicants should also arrange for their letters of reference to be sent directly to the Department.
Send hard-copy materials to:
Lecturer Search Committee
Stanford University
Department of Linguistics
Margaret Jacks Hall, Room 127
Stanford, CA 94305-2150 USA
(Tel: 650-723-4284; Fax: 650-723-5666)
For further information, please contact Professor Arto Anttila at [log in to unmask]
The Department's web page is: http://www-linguistics.stanford.edu.
Application Address:
Nikhila Pai, Department Manager
Linguistics Department
Margaret Jacks Hall, Building 460
Stanford CA 94305-2150
USA
------------------------------------------------
Title: TTS Researcher (Speech Segmentation)
Fields: Speech Synthesis, Text-to-Speech
Speech Segmentation, Speech Processing
Application Deadline: open until filled (cf. www.svox.com)
Employer: SVOX AG in Zurich, Switzerland
Job Description:
SVOX AG is a dynamic technology company specialized in text-to-speech and a leading provider of speech output solutions for the automotive and mobile devices industries. We are seeking to (temporarily or permanently) add to our research and development team a TTS researcher with a proven track record in speech segmentation/alignment for TTS.
You will create new tools, algorithms and approaches for use with our state-of-the-art text-to-speech engine and drive improvements at all levels of the SVOX voice development process.
We invite applications from senior researchers and developers in the TTS field as well as from promising new talents. As a successful leader in the embedded text-to-speech market, SVOX offers highly competitive salaries and benefits. SVOX provides a varied, full-time position in an international, fast-paced, results-oriented environment.
Tasks:
- Drive innovation in speech segmentation for TTS:
- implement and optimize tools for automatic alignment of phonetic
and prosodic events with studio-recorded speech
- prepare speech and transcription material for segmentation
- minimize manual steps in favor of a fully automatic segmentation
- evaluate and control the effect of segmentation accuracy on TTS
output
- Contribute and implement new algorithms for all levels of TTS
- Interface with text analysis and language specialists, software
developers, and sales
- Documentation and communication of results
Qualifications and required skills:
- A degree (PhD or MSc) in computer science or electrical engineering
and 5+ years experience in speech segmentation and TTS
- Outstanding programming skills in
- speech segmentation and digital signal processing (e.g. Matlab, C)
- natural language processing (e.g. Prolog)
- script writing (e.g. Tcl, Python)
- Experience in text-to-speech technology and unit selection on
low-memory platforms
Personal requirements:
- Complete written and spoken fluency in English
- Excellent organization and communication skills
- High comfort level interacting with linguists, engineers, and sales
- Strong commitment to the job
- High quality standards
- Willingness to relocate to Zurich, Switzerland
E-mail contact: [log in to unmask]
Address: SVOX AG, HR
Baslerstrasse 30
CH-8048 Zurich
Switzerland
*******************************
Special issue of Speech Communication: Call for papers
*******************************
CALL FOR PAPERS for special issue of Speech Communication
Special Issue on Bridging the Gap Between Human and Automatic Speech Processing
In the past decade the performance of speech processing systems has improved dramatically, resulting in the widespread use of speech technology in many real-world applications. State-of-the-art large vocabulary speech recognition systems are capable of achieving word error rates of 10-15% or lower on data of the most difficult type (natural, conversational speech). To a large extent, these achievements are due to the use of unprecedented amounts of training data. This has led many speech researchers to believe that future improvements can be obtained by ever-increasing amounts of training data, as well as efficient techniques for utilizing these resources.
On the other hand, speech processing performance still falls short of human performance in many ways, particularly with respect to robustness to mismatches between training and test conditions. An alternative view of the current state of speech recognition research is that the remaining limitations of speech processing can only be overcome by radical modifications to the standard paradigm and, in particular, by integrating more knowledge about crucial characteristics of human speech perception.
This alternative view is particularly interesting since the field of human speech perception research is increasingly utilizing computational and statistical techniques, some of which are inspired by the statistical matching techniques used in the ASR paradigm.
This special issue of Speech Communication is entirely devoted to studies that seek to bridge the gap between human and automatic speech recognition. It follows the special session at INTERSPEECH 2005 on the same topic.
Papers are invited that cover one or several of the following issues:
- quantitative comparisons of human and automatic speech processing
capabilities, especially under varying environmental conditions
- computational approaches to modelling human speech perception
- use of automatic speech processing as an experimental tool in
human speech perception research
- speech perception/production-inspired modelling approaches for speech
recognition, speaker/language recognition, speaker tracking,
sound source separation
- use of perceptually motivated models for providing rich
transcriptions of speech signals (i.e. annotations going beyond
the word, such as emotion, attitude, speaker characteristics, etc.)
- fine phonetic details: how should we envisage the design and evaluation
of computational models of the relation between fine phonetic details in the
signal on the one hand, and key effects in (human) speech processing on the
other?
- how can advanced detectors for articulatory-phonetic features be integrated
into computational models of human speech processing?
- the influence of speaker recognition on speech processing
Guest Editors:
Katrin Kirchhoff Louis ten Bosch
Department of Electrical Engineering Dept. of Language and Speech
University of Washington Radboud University Nijmegen
Box 352500 Post Box 9103
Seattle, WA, 98195 6500 HD Nijmegen
(206) 616-5494 +31 24 3616069
[log in to unmask] [log in to unmask]
Important Dates:
Submission Deadline: April 30, 2006
Notification of Acceptance: August 30, 2006
Final Manuscript Due: September 30, 2006
Tentative Publication Date: December 2006
Submission Procedure:
Prospective authors should follow the regular guidelines of the Speech Communication Journal for electronic submission (http://ees.elsevier.com/specom). Authors must select the section "Special Issue Paper: Bridging the Gap...", not "Regular Paper", and request Professor Julia Hirschberg as Managing Editor for the paper.
*********************************************************************
Items for the next issue of foNETiks should reach us by 28 January 2006.
*********************************************************************