********************************************
foNETiks
A network newsletter for the
International Phonetic Association
and for the Phonetic Sciences
December 2002
********************************************

Editors:
Linda Shockey, University of Reading, UK <[log in to unmask]>
Gerry Docherty, University of Newcastle, UK <[log in to unmask]>
Paul Foulkes, University of York, UK <[log in to unmask]>
Lisa Lim, National University of Singapore <[log in to unmask]>

E-mail address: [log in to unmask]

The foNETiks archive can be found on the WWW at:
http://www.jiscmail.ac.uk/lists/fonetiks.html

Visit the IPA web page at:
http://www.arts.gla.ac.uk/IPA/ipa.html

****************************************************************
THE SEASON'S WARMEST GREETINGS FROM THE EDITORS TO ALL READERS
****************************************************************

**********
IPA NEWS
**********

Election of the Association's Council 2003-2007
& Election to the Permanent Council for the Organisation of the
International Congresses of Phonetic Sciences 2003-2011

IPA members: Please return your ballot paper to the Secretariat by
3 December 2002.

John H. Esling, Secretary <[log in to unmask]>

************************************
ANNOUNCEMENTS [new ones marked ++]
[date of first appearance follows]
************************************

2 - 5 December 2002. 9th Australian International Conference on
Speech Science & Technology 2002. University of Melbourne, Australia.
http://www.conferences.unimelb.edu.au/SST/ (07/02)

2 - 6 December 2002. Joint Meeting: 144th Meeting of the Acoustical
Society of America, 3rd Iberoamerican Congress on Acoustics and 9th
Mexican Congress on Acoustics. Cancun, Mexico.
http://asa.aip.org/cancun.html (12/00)

5 - 7 December 2002. 2nd Workshop on Sociolinguistic, Phonetic and
Phonological Characteristics of /r/. Universite Libre de Bruxelles,
Belgium. http://www.ulb.ac.be/philo/phonolab/r-atics2/r-atics2.htm;
<[log in to unmask]> (10/02)

++ 9 - 11 December 2002.
2002 International Workshop on Multimedia Signal Processing.
St. Thomas, US Virgin Islands. http://mmsp02.njit.edu/ (12/02)

9 - 11 January 2003. Old World Conference in Phonology I (Segmental
Phonology). Leiden University, Leiden, The Netherlands.
http://www.let.leidenuniv.nl/ulcl/events/ocp1/ (06/02)

++ 9 - 11 January 2003. Workshop on Spoken Language Processing. Tata
Institute of Fundamental Research, Mumbai, India.
http://speech.tifr.res.in/wslp/ (12/02)

21 - 24 January 2003. 8th International Symposium on Social
Communication. Centro de Linguistica Aplicada, Santiago de Cuba,
Cuba. <[log in to unmask]> (03/02)

7 - 9 March 2003. Texas Linguistics Society. Theme: The Dynamics of
Coarticulation in Speech Production and Perception. The University of
Texas at Austin, USA. http://uts.cc.utexas.edu/~tls/ (10/02)

27 - 29 March 2003. International Colloquium on Prosodic Interfaces.
Nantes, France.
http://www.lettres.univ-nantes.fr/infos/ip2003/IP2003GB.html;
<[log in to unmask]> (06/02)

++ 4 - 5 April 2003. ISCA (International Speech Communication
Association) Tutorial and Research Workshop on Multilingual Spoken
Document Retrieval. Macau. (prior to ICASSP2003 in Hong Kong; see
next entry) http://www.se.cuhk.edu.hk/MSDR/ (10/02)

++ 6 - 10 April 2003. The 28th International Conference on Acoustics,
Speech, and Signal Processing (ICASSP). Convention & Exhibition
Center, Hong Kong. http://www.eie.polyu.edu.hk/~icassp03/ (12/02)

7 - 11 April 2003. École thématique de phonologie et de phonétique.
Île de Porquerolles, France. http://www.lpl.univ-aix.fr/~etpp03;
<[log in to unmask]> (10/02)

++ 14 - 16 April 2003. ISCA/IEEE Workshop on Spontaneous Speech
Processing and Recognition. Tokyo, Japan. (after ICASSP2003 in Hong
Kong; see entry above) http://www.sspr2003.com/index.html (12/02)

18 - 20 April 2003. Phonetics Today: IV International Conference.
Moscow, Russia. <[log in to unmask]> (11/02)

++ 20 - 23 May 2003.
NOLISP 03: ISCA (International Speech Communication Association)
Tutorial and Research Workshop on Non-Linear Speech Processing.
Le Croisic, France. <[log in to unmask]> (12/02)

++ 19 - 21 June 2003. Theoretical and Experimental Neuropsychology
(TENNET XIV). Université du Québec, Montreal, Canada.
http://www.tennet.ca; <[log in to unmask]> (12/02)
[further details below]

1 - 4 July 2003. Child Phonology Conference. UBC, Vancouver.
<[log in to unmask]>; <[log in to unmask]> (09/02; 12/02)
[further details below]

7 - 11 July 2003. Headhood, Contrastivity and Specification; and From
Representations to Constraints and from Constraints to
Representations. Université de Toulouse-Le Mirail, Toulouse, France.
<[log in to unmask]> (11/02)

3 August 2003. Intonation in language varieties - AM approaches.
Satellite workshop at ICPhS2003 [see next entry]. Universitat de
Barcelona, Barcelona, Spain.
http://www.vuw.ac.nz/lals/icphs/intonation_in_varieties.html (10/02)

3 - 9 August 2003. ICPhS2003: The 15th International Congress of
Phonetic Sciences. Palau de Congressos, Plaça d'Espanya, Barcelona,
Spain. http://shylock.uab.es/icphs/ (08/01)

++ 6 - 8 August 2003. 2nd International Conference on Speech, Writing
and Context. Osaka, Japan.
http://www.kansaigaidai.ac.jp/teachers/toyota/ICSWC2.htm;
<[log in to unmask]> (12/02) [further details below]

11 - 15 August 2003. Fourth World Congress on Fluency Disorders.
Montreal, Canada. http://www.ifacongress2003.com (05/02)

++ 28 - 31 August 2003. ISCA (International Speech Communication
Association) Tutorial and Research Workshop: Error Handling in Spoken
Dialogue Systems. Hotel Roc et Neige, Chateau-d'Oex-Vaud,
Switzerland. http://www.speech.kth.se/error/; <[log in to unmask]>
(12/02) [further details below]

1 - 4 September 2003. EUROSPEECH'2003: 8th European Conference on
Speech Communication and Technology. Geneva, Switzerland.
http://www.symporg.ch/eurospeech/ (08/01)

++ 5 - 8 September 2003. DiSS'03: Disfluency in Spontaneous Speech.
Gothenburg, Sweden. http://www.ling.gu.se/konferenser/diss03/ (12/02)

11 - 14 September 2003. 4th UK Language Variation and Change
Conference. University of Sheffield, UK. <[log in to unmask]> (04/02)

8 - 10 October 2003. 2nd International Workshop on the Phonology and
Morphology of Creole Languages. University of Siegen, Germany.
http://www.uni-siegen.de/~engspra/workshop/ (11/02)

8 - 10 December 2003. 6th International Seminar on Speech Production.
Sydney, Australia. http://www.maccs.mq.edu.au/events/2003/issp2003;
http://www.maccs.mq.edu.au/mailman/listinfo/speechprodconf (10/02)

23 - 26 March 2004. Speech Prosody 2004. Nara, Japan.
<[log in to unmask]> (10/02) [further details below]

++ 29 - 30 March 2004. International Symposium on Tonal Aspects of
Languages - with Emphasis on Tone Languages. Institute of
Linguistics, Beijing, China. <[log in to unmask]> (12/02)
[further details below]

*************************
CONFERENCES & WORKSHOPS
*************************

CALL FOR PAPERS

THEORETICAL AND EXPERIMENTAL NEUROPSYCHOLOGY (TENNET XIV)
Montreal, Canada, June 19-21, 2003

The 14th Annual Conference on Theoretical and Experimental
Neuropsychology, TENNET XIV, will be held in June 2003 at the
Université du Québec in Montreal, Quebec, Canada.

The basic conference structure is (a) two invited thematic symposia
of 3 hours each day, followed by (b) refereed poster papers. The
poster papers are discussed after the second symposium each
afternoon.

Participants may submit abstracts (250-word limit), short papers
(maximum of 4 manuscript pages, including one table or figure and up
to 5 references) or regular papers for consideration. Abstracts are
printed in Brain and Cognition as part of the conference proceedings.
Short papers will be reviewed by the Program Committee and published
in Brain and Cognition as peer-reviewed 'Brief Reports'. Regular
papers will be peer-reviewed, and accepted papers will be published
as standard articles.
Authors of accepted submissions (abstracts, papers) will be asked to
prepare poster presentations for the TENNET conference. The deadline
for submissions, via e-mail only, is January 5, 2003.

New Information for Refereed Submissions

All submissions should deal with a well-defined topic or problem in
any domain of experimental, clinical or theoretical neuropsychology,
including neurolinguistics, development and history. The title of the
presentation, the full name(s) of the author(s) (with complete
mailing address, institutional affiliation if any, telephone number
and e-mail) and acknowledgments should appear on the first page of
the submission. This information is needed to prepare the program
properly if your paper is accepted.

Three types of submissions will be considered:

1) An abstract of 250 words or less, for publication as part of the
conference proceedings and to serve as an archival record of a poster
presentation;

2) A short paper, with a maximum of 4 manuscript pages (including one
table or figure, and up to 5 references). These submissions will be
reviewed by the Program Committee, and accepted short papers will be
published as peer-reviewed "Brief Reports" in the special issue of
Brain and Cognition that contains the conference proceedings; or

3) A regular scientific paper (APA manuscript style is required),
including a 200-word abstract and a maximum of three (3) tables or
figures. These submissions will go through a standard peer-review
process, and accepted papers will appear as regular feature articles
in the special issue of Brain and Cognition that contains the
conference proceedings. If a paper submission is not accepted, the
author may be invited to re-submit a short paper or abstract for
publication (see above).

If a submission is accepted, one of the authors must attend the
conference to present a poster in order for the abstract or paper to
be published in the journal.
Your submission should arrive by the January 5 deadline at the TENNET
office in Montreal. Submissions received early will be sent for
review as soon as they come in.

IMPORTANT: Please check your submission with an updated
general-purpose antivirus application before sending it by e-mail.
Attached files should be in MS Word or RTF format.

Submissions should be sent to: [log in to unmask]

Further information about accommodation, registration, the
preliminary program and past conferences can be found at the
following web site: http://www.tennet.ca

Cheers,
Henri Cohen & Peter J. Snyder

Henri Cohen, Ph.D.
Cognitive Neuroscience Center & Dept. of Psychology, UQAM
PB 8888, Stn. Centre-Ville
Montreal, Qc. H3C 3P8
Canada
T: (514) 987-4445
F: (514) 987-8952
I: [log in to unmask]
http://www.tennet.ca

********************

CALL FOR PAPERS

2003 Child Phonology Conference
University of British Columbia, Vancouver, BC, CANADA

UBC is hosting the 2003 Annual Child Phonology Conference July 1-4,
2003. The general schedule for the conference will be as follows:

July 1: Evening reception (optional Canada Day celebrations)

July 2: Symposia: Segmental and Feature Development
                  Word and Syllable Structure Development
        Posters: Other topics

July 3: Symposia: Phonology in Context -- perception,
                  morphology/phonology, syntax/morphology, motor
                  development/phonology, metaphonology, literacy,
                  hearing impairment/phonology, etc.
        Posters: Other topics

July 4: Optional: Lab tours, focused research discussion groups

We are now calling for symposium paper or poster abstracts for the
above topics. Symposium papers will be 20-30 minutes in length.
Completed symposium papers need to be received by June 1, in order to
be sent to the discussants. Posters will be up for the whole day,
with a dedicated discussion time at the end of the day. Posters need
not be sent ahead of time.
Please send an abstract of no more than 250 words by e-mail to
Barbara Bernhardt <[log in to unmask]> BY JANUARY 31, 2003.

Indicate your presentation preference:
[ ] symposium paper only
[ ] poster only
[ ] prefer symposium, but poster OK if necessary
[ ] no preference

In February, abstracts will be reviewed, symposia organized, and
discussants invited. The plan will be adjusted depending on
submissions.

Note: We are hoping to disseminate the papers from the conference
either in a book or on the web.

We hope to see you there!

Barbara Bernhardt & Joe Stemberger

********************

2nd International Conference on Speech, Writing and Context
6 - 8 August 2003, Osaka, Japan
Call Deadline: 31 January 2003

The Second International Conference on Speech, Writing and Context
(ICSWC2) carries on the tradition set by the first conference, held
at the University of Nottingham in July 1998. These conferences aim
to explore the two modes of linguistic communication in their
situations of usage. For this second conference, the specific focus
will be on promoting cross-fertilization of concepts from various
research fields. The presentations will be organized along three
strands: linguistics; literature and language education; and
multimedia and information technology.

Plenary speakers:
Ronald Carter, University of Nottingham, UK
Suzanne Eggins, University of New South Wales, Australia
Rebecca Hughes, University of Nottingham, UK
David Olson, University of Toronto, Canada

Papers should be 20 minutes in duration, with a further 10 minutes
allowed for discussion.
Further information is available via the conference web site:
http://www.kansaigaidai.ac.jp/teachers/toyota/ICSWC2.htm

Contact Person: Hiromi Murakami [log in to unmask]

********************

PRELIMINARY CALL FOR PARTICIPATION

ISCA (International Speech Communication Association) Tutorial and
Research Workshop: Error Handling in Spoken Dialogue Systems

August 28-31, 2003
Hotel Roc et Neige, Chateau-d'Oex-Vaud, Switzerland

Workshop website: http://www.speech.kth.se/error/
Webmaster: Gabriel Skantze
Email contact: [log in to unmask]

Supported by:
CLIF (Computational Linguistics in Flanders)
SIGDIAL

Aims:

Spoken dialogue systems, in real applications as well as in research,
have attracted increased attention in recent years. Given the
limitations of current speech technologies, both for recognition and
understanding and for speech generation, this interest in 'real'
systems has led to an increased awareness of the problems raised by
system errors, especially in recognizing user input, and of the
consequent confusion they may cause for both users and the system
itself over the course of the dialogue. The need to devise better
strategies for detecting problems in human-machine dialogues, and
then dealing with them gracefully, has become paramount for spoken
dialogue systems.

This workshop will consider all aspects of how systems can detect and
recover from problems in spoken dialogue. We will address questions
such as:

What can we learn from errors in human-human and Wizard-of-Oz studies
that will help us to handle error in human-machine dialogue systems?

How do systems detect when a dialogue is 'going wrong'? How do they
define such conditions? What factors are the key contributors to and
indicators of 'bad' dialogues?

How do systems identify their own errors? What are the most important
causes of such errors, from the user side (e.g. non-native accent,
hyperarticulated speaking style, gender, age) and from the system
side (e.g. inappropriate prompts)?
How difficult is it to determine the causes of a particular error?

How can we predict which dialogues will be successful? How should we
define 'success'? What features can best predict it?

What mechanisms can be devised to allow systems to recover from error
gracefully? Can we devise adaptive strategies to identify patterns of
error and respond accordingly?

What sorts of behavior do users exhibit when faced with system
errors? Can these be taken into account in error handling?

What measures (better prompts, anticipation of likely error, better
help information) can be taken to minimize possible errors?

Papers are invited on innovations in ways that systems can detect
their own errors (e.g. through features such as ASR confidence
scores); on methods for evaluating spoken dialogue systems that
include system errors and error recovery as a major component; on
strategies for determining on-line when dialogues are 'going wrong';
on mechanisms for recovering once errors are detected; on laboratory
and corpus-based studies of human behavior relevant to human-machine
problem detection/recovery; and on methods for minimizing dialogue
problems (e.g. by varying dialogue strategy or system prompts).

Position papers are also invited for a special session, to identify
which aspects of error handling are most in need of additional
attention and to propose research approaches in those areas.

Invited Speakers:
Herb Clark, Stanford University
Emiel Krahmer, Tilburg University
Mike Phillips, Speechworks
Atsushi Shimojima, JAIST

Important Dates:
Submissions due: March 1, 2003
Notification of acceptance: April 15, 2003
Deadline for early registration: May 1, 2003
Deadline for regular registration: June 1, 2003
Deadline for final papers: June 1, 2003
Workshop: August 28-31, 2003

Submission requirements:

Abstracts of no more than 800 words (please state whether the
submission is for a regular session or for the special session on
future research) should be submitted electronically by March 1.
Details for submission will be available at
http://www.speech.kth.se/error/

Workshop Location:

Hotel Roc et Neige
(http://www.cm.be/images/intersoc/Chateau/chateaug.htm)
in the town of Chateau-d'Oex-Vaud
(http://www.skiswitzerland.com/chateau/chateau.htm)
in the 'alpes vaudoises' in the Lake Geneva region of Switzerland.

Accommodation and Registration Fees: TBA

Proceedings: Workshop proceedings will be available upon registration
at the conference center and subsequently on the workshop web site.

Language: The official language of the workshop will be English.

ISCA

The International Speech Communication Association (ISCA) is a
non-profit organization for promoting Speech Communication Science
and Technology internationally. For membership and other information,
please contact the ISCA secretariat at:

c/o Institut fuer Kommunikationsforschung und Phonetik
Universitaet Bonn
Poppelsdorfer Allee 47
D-53115 Bonn, Germany
Tel: (+49) 228-735638
Fax: (+49) 228-735639
Email: [log in to unmask]
URL: http://www.isca-speech.org

This workshop is endorsed by SIGdial (www.sigdial.org) and CLIF.

Organizing Committee:
Rolf Carlson, KTH
Julia Hirschberg, Columbia University and AT&T Labs -- Research
Marc Swerts, University of Antwerp and Technische Universiteit
Eindhoven

International Scientific Committee:
Linda Bell, Telia Research
Lou Boves, Nijmegen University
Susan Brennan, SUNY Stony Brook
Jim Glass, MIT
Yasuhiro Katagiri, ATR
Emiel Krahmer, Tilburg University
Diane Litman, University of Pittsburgh
Elmar Noeth, Erlangen University
Norbert Reithinger, DFKI
Sophie Rosset, LIMSI
Alex Rudnicky, CMU
Elizabeth Shriberg, SRI
Marilyn Walker, AT&T Labs -- Research

********************

SPEECH PROSODY 2004
FIRST ANNOUNCEMENT
March 23-26, 2004, Nara, Japan

Inspired by the success of the first international conference on
speech prosody, held in 2002 in Aix-en-Provence, Japanese researchers
working on speech prosody are planning to bring the next conference
to Nara, an ancient capital of Japan.
The rationale for holding the second conference in Japan is that we
have two ongoing projects related to prosody: one is "Prosody and
Speech Processing," a large-scale national project supported by a
Grant-in-Aid for Scientific Research on Priority Areas from the
Ministry of Education, Science and Culture of Japan (headed by
Keikichi Hirose at the University of Tokyo), and the other is
"Processing of Expressive Speech," funded by the Japan Science &
Technology Agency (headed by Nick Campbell at ATR). Since both
projects will last until the end of March 2004, it is quite opportune
for us to host the conference toward the end of March, when the
climate and scenery in Nara are ideal.

The current plan is as follows:

- Venue: Nara New Convention Center, Nara, Japan

- Period: March 23 (Tuesday) to 26 (Friday), 2004

- Committee:
  Honorary Chair: Hiroya FUJISAKI (Professor Emeritus, University of
                  Tokyo)
  Chair: Keikichi HIROSE (University of Tokyo)
  Vice Chair: Nick CAMPBELL (ATR)
  Secretary: Nobuaki MINEMATSU (University of Tokyo)

- Preliminary List of Topics:
  Analysis, Formulation and Modeling of Prosody
  Syntax, Semantics, Pragmatics and Prosody
  Cross-linguistic Studies of Prosody
  Variations in Prosody
  Prosody of Dialogues and Spontaneous Speech
  Prosody of Emotional Speech
  Prosody and Speech Perception
  Prosody in Speech Synthesis
  Prosody in Speech Recognition/Understanding
  Pathology of Prosody and Aids for the Impaired
  Speech Corpora for Prosody Research/Applications
  Others

We are also pleased to give preliminary information on a satellite
symposium being planned by our colleagues at the Institute of
Linguistics, Chinese Academy of Social Sciences (CASS) in Beijing.
Here are the plans:

- Tentative Title: International Symposium on Tonal Aspects of
  Languages - with Emphasis on Tone Languages

- Venue: Institute of Linguistics, Beijing, China

- Period: March 28 (Sunday), 2004: registration
          March 29 (Monday) to 30 (Tuesday): symposium

- Committees (tentative):
  Honorary Chair: Zongji WU (Emeritus Member, Institute of
                  Linguistics)
  Chair: Maocan LIN (Institute of Linguistics)
  Secretary: Aijun LI (Institute of Linguistics)

For further information, please contact:
Keikichi HIROSE mailto:[log in to unmask] (for SP2004)
Maocan LIN mailto:[log in to unmask] (for the symposium in Beijing)

We hope that our plans will meet the approval of many colleagues
working in fields of research related to speech prosody, in both
basic science and applied technology, and we look forward to
welcoming you to Nara and Beijing.

Keikichi HIROSE and Maocan LIN

********************
POSITIONS VACANT
********************

ASSISTANT PROFESSOR POSITION, SPEECH SCIENCE

Assistant Professor position available in Speech/Hearing Science,
Department of Audiology and Speech Sciences, Purdue University, West
Lafayette, IN 47907-1353. This is a tenure-track, 10-month position
available Fall 2003. Ph.D. required. The successful candidate is
expected to pursue an active research program in an area related to
normal and/or atypical (e.g., language-impaired, hearing-impaired)
developmental aspects of speech perception. Teaching duties include
undergraduate and graduate courses in developmental speech
perception, acoustics, and related areas.

To be assured of full consideration, complete applications should be
received by December 6, 2002; however, applications will continue to
be accepted until the position is filled. A curriculum vitae, letter
of application, selected publications/papers, and three letters of
recommendation that address the candidate's potential in both
teaching and research should be sent to: Jackson T.
Gandour, Ph.D., Chair, Search Committee, Department of Audiology and
Speech Sciences, Heavilon Hall, Purdue University, West Lafayette, IN
47907-1353. E-mail correspondence should be directed to:
[log in to unmask]

Purdue is an equal access/equal opportunity/affirmative action
employer fully committed to achieving a diverse workforce.
http://www.sla.purdue.edu/academic/aus/

********************

POSTDOCTORAL RESEARCH POSITIONS IN SPEECH, HEARING & SENSORY
COMMUNICATION AT INDIANA UNIVERSITY

Indiana University is pleased to announce the availability of NIH
Postdoctoral Traineeships in Speech, Hearing, and Sensory
Communication. Traineeships are available to qualified individuals
with Ph.D. degrees who wish to further their background and training
in any of the following areas of basic and clinical research:

(1) Speech Perception and Production; (2) Spoken Word Recognition and
Lexical Access; (3) Auditory Psychophysics and Hearing Science;
(4) Clinical and Experimental Audiology; (5) Tactile Psychophysics
and Communication; (6) Acoustic and Articulatory Phonetics and
Laboratory Phonology; (7) Perceptual and Cognitive Development;
(8) Clinical Phonetics and Phonology; (9) Sensory Aids for the
Hearing Impaired; (10) Speech Perception in Deaf Children and Adults
with Cochlear Implants; (11) Language Development in Hearing-Impaired
Children and Deaf Children with Cochlear Implants.

The program welcomes individuals with backgrounds and previous
training in Speech and Hearing Sciences, Linguistics, Engineering,
Developmental and Experimental Psychology, and Cognitive Science.
Trainee salaries, consistent with current NIH guidelines, range from
$31,092 to $38,712, plus a modest travel allowance. Trainees are
expected to carry out empirical and/or theoretical research and to
collaborate with core professors and other research scientists
currently working in the laboratories and clinics in Bloomington and
at the Indiana University School of Medicine in Indianapolis.
Interested applicants are strongly encouraged to send: (1) an
up-to-date vita, (2) a personal letter describing their specific
research interests, goals, and long-term career plans, and
(3) reference letters from three people who can describe the
applicant's background, interests, research potential and previous
accomplishments, as soon as possible before February 1, 2003.
Reprints and preprints should also be included with the application
letter. Women, minority members, and handicapped individuals are
urged to apply.

NIH guidelines require that 'the individual to be trained must be a
citizen or a non-citizen national of the United States or have been
lawfully admitted for permanent residence at the time of
appointment.'

Send all correspondence and materials to:
Professor David B. Pisoni, Program Director
Indiana University, Department of Psychology
1101 E. 10th St.
Bloomington, Indiana 47405-7007
tel (812) 855-1155, fax (812) 855-1300
Email: [log in to unmask]

Indiana University is an Equal Opportunity/Affirmative Action
Employer.

********************

PHONETIC TRANSCRIBERS (Full-time or Part-time) (Ref. # R & D 2002/1)

Hong Kong Applied Science and Technology Research Institute Company
Limited, Internet Group

Requirements:
1. Strong background in phonetics, with excellent knowledge of the
English phonological system.
2. Familiarity with both American and British accents.
3. Practical experience in performing narrow phonetic transcription
of spoken English.

Duties: To do orthographic and phonetic transcription of speech data.
Candidates may work at home or via the Internet.

Application Deadline: open until filled.

Remuneration: Commensurate with applicant's qualifications and
experience.
Application Address: 18/F, Tower 6, Gateway, 9 Canton Road, Tsim Sha
Tsui, Kowloon, Hong Kong

Contact Information: Director of Administration [log in to unmask]

******************
NEW PUBLICATIONS
******************

The Phonetics of Wa: Experimental Phonetics, Phonology, Orthography
and Sociolinguistics. Justin Watkins. 2002.
ISBN 0 85883 486 3. xxvii + 226 pp.
Australia A$47.85; International A$43.50

This is a linguistic phonetic study of the Northern Mon-Khmer
language Wa, spoken by about one million people in an area on the
border between China's Yunnan Province and Burma's (Myanmar's) Shan
State. The aim of this book is to describe the phonetic facts of the
sounds of Wa in terms of the simplest segment types without
compromising detail, and to illustrate the types of contrasts which
distinguish them from one another, so that they may be viewed in a
wider phonetic linguistic context. It is hoped that sufficient
material is presented here to inform a comparison of dialectal
variants of Wa and that the instrumental data may be of value in
comparing a sound in Wa with similar sounds in other languages.

This study aims to be accessible to all those who are interested in
the relevance of phonetics to linguistics. It is hoped that certain
sections, in particular the background information and the discussion
of topics relating to the historical phonology of Wa, may be of
interest to a wider readership, namely Mon-Khmerists, those working
on other minority languages of South East Asia or elsewhere, or those
with a general interest in Wa language, culture or society.

Orders may be placed by mail, e-mail or telephone with:
Publishing, Imaging and Cartographic Services (PICS)
Research School of Pacific and Asian Studies
The Australian National University
Canberra ACT 0200, Australia
Tel: +61 (0)2 6125 3269
Fax: +61 (0)2 6125 9975
mailto:[log in to unmask]

Credit card orders are accepted.
Other enquiries (but not orders) should go to:
The Publications Administrator
Pacific Linguistics
Research School of Pacific and Asian Studies
The Australian National University
Canberra ACT 0200, Australia
Tel: +61 (0)2 6125 2742
Fax: +61 (0)2 6125 4896
mailto:[log in to unmask]

********************

'Streaming Speech: Listening and Pronunciation for Advanced Learners
of English.' Windows CD-ROM. ISBN 0-9543447-0-7.
New publication from speechinaction.

'Streaming Speech: Listening and Pronunciation for Advanced Learners
of English', published by Richard Cauldwell, is a Windows CD-ROM: a
ten-chapter publication that features recordings of spontaneous
speech by people who work at the University of Birmingham but who
come from different parts of the UK and Ireland. It is a publication
in the tradition of Barbara Bradford's 'Intonation in Context' and
David Brazil's 'Pronunciation for Advanced Learners of English', but
it takes major strides forward in its use of spontaneous speech and
of multimedia/web-based technology to link sound and text.

It is aimed at EFL/ESL students who aspire to be advanced listeners
and speakers of everyday English: those who want to handle fast,
fluent speech both in listening and in their own speech. Students
recently arrived (or about to arrive) in an English-speaking country
for study at a university will find 'Streaming Speech' particularly
useful, as will those studying for high-level oral/aural
examinations. It is designed for self-study, though it can be used in
classroom mode. It is ideal for non-native-speaker teachers of
English who want to improve their oral/aural skills.

For listening, particular attention is paid to the fastest stretches
of speech. Each chapter focuses on those features of natural speech
(variability in speed, volume, rhythm, intonation) which change the
soundshapes of words, often making them very different (sometimes
unrecognisable) from their dictionary forms.
For pronunciation, the vowels, consonants, and clusters of English
are presented in tone-units/speech-units of unscripted spontaneous
speech. Users are required to practise accuracy and fluency
simultaneously: accuracy in syllables with the target
vowel/consonant, fluency through imitating the flow of the
speech-unit in which they occur. Users listen, imitate, record, and
compare their versions with the original spontaneous speech. Chapter
9 offers users the choice of six voices on which to model their
pronunciation: each speaker presents the vowels and consonants of
English.

Throughout there is a strong teacher-training component. Section 3 of
each chapter, 'Discourse Features', focuses on an aspect of the
stream of speech, and Chapter 10 provides a summary.

The cost for a single user is 30.00 British Pounds. Write to
speechinaction for a brochure at the address below, or go to
http://www.speechinaction.com and follow the links to Streaming
Speech for further information and to download an order form. You can
pay either by cheque (British Pounds) or by credit card.

You can try a sample chapter, and the introduction, on the web at
http://www.fab24.net/examples/streamingspeech.htm
(You will need Internet Explorer 5.5 or later, and Macromedia Flash
Player.)

Richard Cauldwell
speechinaction
PO Box 10662
Birmingham B17 0ZE
http://www.speechinaction.com

*******
LISTS
*******

PHONETICS: Invitation for student subscriptions

With the new semester beginning in many countries, a lot of you who
teach have a new crop of phonetics students. In addition to being a
general discussion forum for those who teach or work in
phonetics-related fields, the PHONETICS list was also originally
intended for *students* of phonetics at all levels. We thus issue a
call to all teachers of phonetics to invite your new (and old)
students to join the list, so that they can extend their involvement
in phonetics beyond the classroom into their everyday lives.
PHONETICS is a place to ask any kind of phonetics question at all -
the only 'stupid' question is one that isn't asked! - or to bring up
any phonetics-related topic to discuss with interested subscribers,
now over 200, living in more than 30 countries around the world.
Discussion lists like PHONETICS can be a valuable resource in
stimulating student interest and motivation in a subject, and they
create a sense of community, in addition to being a handy place to go
for help in a pinch. There are many outstanding phoneticians on the
list (if you are one and are not subscribed, you're invited too!) who
can share their knowledge and experience, as well as speakers of a
large variety of languages who can offer firsthand information about
phonetic data.

If you already have your students' e-mail addresses, you can e-mail
them an invitation to join PHONETICS; a suggested text that you can
copy and paste follows below. Otherwise you can put the URL of the
sign-up page on your syllabus, Web site, or the black/whiteboard:
http://www.topica.com/lists/phonetics/; or you can have students send
a blank e-mail to [log in to unmask] There is also a sign-up box at:
http://ccms.ntu.edu.tw/~karchung/phon1index.htm.

Looking forward to a new semester of lively phonetics discussions!

Karen Steffen Chung
National Taiwan University
[log in to unmask]
Homepage: http://ccms.ntu.edu.tw/~karchung/

Sample text to invite students to join PHONETICS:

You are invited to join PHONETICS, an international discussion list
with over 200 subscribers in more than 30 countries. PHONETICS is a
good place to ask any kind of question you have about phonetics, to
discuss phonetics-related issues, and to network with others who
share an interest in phonetics.
To join, enter your e-mail address in the subscription box at:
http://www.topica.com/lists/phonetics/, where you will also find a
description of the list; or send a blank e-mail to
[log in to unmask] There is also a sign-up box at:
http://ccms.ntu.edu.tw/~karchung/phon1index.htm. You will receive
list-owner approval shortly after submitting your subscription
request.

********************

Deadline for inclusion of items in the next foNETiks:
26 December 2002.

Lisa Lim
*************************************************************************
Assistant Professor
Department of English Language & Literature
National University of Singapore
Block AS5, 7 Arts Link                          tel +65 8746037
Singapore 117570                                fax +65 7732981
e-mail [log in to unmask]
*************************************************************************