~~~~~~~ BRITISH HCI GROUP NEWS SERVICE ~~~~~~~~~~~
~~         http://www.bcs-hci.org.uk/           ~~
~~ All news to: [log in to unmask]  ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ NOTE: Please reply to article's originator,  ~~
~~ not the News Service                         ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                   WORKSHOP ANNOUNCEMENT AND CALL FOR PAPERS

        Representing, Annotating, and Evaluating Non-Verbal and Verbal
          Communicative Acts to Achieve Contextual Embodied Agents
                http://aos2.uniba.it:8080/aa-ws.html

                             May 29, 2001
                           Montreal, Canada

                          in conjunction with
        The Fifth International Conference on Autonomous Agents
                    http://www.csc.liv.ac.uk/~agents2001/


Embodied agents are receiving more and more attention due to the
variety of their applications, ranging from synthetic actors and
pedagogical agents to assistants for the hearing-impaired. To make an
agent interesting to interact with, as well as useful for a given
application, it should behave and talk according to the context in
which the interaction takes place and to the interlocutor it is
addressing. Moreover, while the agent talks, verbal and nonverbal
signals are at work and should be tightly integrated with each other.
Interactive systems therefore require a methodology for automatically
coordinating verbal and nonverbal signals. Using overly simple models
of communicative behavior is likely to produce a poor agent with
which the user will quickly become bored. Several problems arise in
attempting to synchronize communication signals with speech:

- One needs to define which communicative signals are relevant for a
  given application to make the agent believable. Some applications
  require caricatured, over-expressive behaviors, while others need
  very accurate and realistic behaviors.

- One needs to consider the context of the interaction. Is the
  application directed at children, adults, or computer scientists?
  Do we want a friendly agent or an authoritative one? How should 3D
  world objects be integrated into the context of the conversation
  (say, objects to point at or grab while explaining a task, or
  physical objects to play with)?

- Having defined these elements, one needs to represent them and find
  a notation to describe them. In addition, the problem of which
  annotation schemas to employ for multimodal (verbal and nonverbal)
  signals has to be solved, that is, how to annotate a discourse so
  that it also captures nonverbal information (a minimal illustration
  follows this list).
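
To make the annotation problem concrete, here is a minimal sketch of
a time-aligned scheme in which each utterance segment is paired with
the nonverbal signals that co-occur with it. It is purely
illustrative, not a proposed standard; all field names and values
below are hypothetical.

    # Illustrative only: a minimal time-aligned multimodal annotation.
    # Each segment pairs the spoken words with co-occurring nonverbal
    # signals; all field names here are hypothetical.
    discourse = [
        {"start": 0.0, "end": 1.2,            # times in seconds
         "speech": "Take this piece",
         "gesture": {"type": "deictic", "target": "piece-3"},
         "gaze": "interlocutor"},
        {"start": 1.2, "end": 2.5,
         "speech": "and put it here",
         "gesture": {"type": "deictic", "target": "slot-B"},
         "gaze": "slot-B",
         "facial": "raised-eyebrows"},        # marks emphasis
    ]

    def signals_at(t):
        """Return the nonverbal signals active at time t (seconds)."""
        for seg in discourse:
            if seg["start"] <= t < seg["end"]:
                return {k: v for k, v in seg.items()
                        if k not in ("start", "end", "speech")}
        return {}

    print(signals_at(1.5))  # gesture, gaze, facial of second segment

Even this toy scheme exposes the questions listed above: which signal
types to encode, at what granularity to time-align them, and how to
attach contextual elements (here, the objects "piece-3" and "slot-B")
to the discourse.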


MAJOR TOPICS

Collecting and using empirical data as a basis for communicative behavior:

- how to collect, analyze and exploit empirical data from human-human
  interaction with regard to verbal / nonverbal inter-dependencies
  (timing, co-occurrence, context);

- how to specify which communicative signals (type of facial
  expressions, gaze and gestures...) to consider and model for the
  agent;

- how to assess, interpret, and determine responsiveness to the context
  in which the interaction takes place, that is, which elements of the
  context are relevant to the conversation and how to represent these
  elements;

- how to design tools for coding, viewing and analyzing multimodal
  interactions;

Representation of multimodal information as a basis for communicative
behavior:

- how to specify the geometric and functional properties of real
  objects for a physically embodied agent to interact with;

- which models to use for emotions and personality for both the agent
  and the user (how to represent the "affect display" function and/or
  the potential affective effect of multimodal signals);

- how to represent communicative signals and which notations to use
  (describing the facial expressions, gaze, and gestures through
  which they are manifested);

- how to represent the meaning level of communicative signals;

- how to represent the knowledge used by the agent;

- which discourse annotation scheme should be used to integrate
  verbal and nonverbal information;

- how to ensure synchronization between verbal and nonverbal signals.

System architecture and evaluation:

- how to define a general architecture to generate agents
  communicating multimodally;

- how to evaluate multimodal interaction with an embodied agent
  (which modality causes which effect, and to what degree? Do the
  modeled communicative functions achieve the intended effect?).


WORKSHOP FORMAT

The workshop will consist of presentations of selected papers. Its
orientation is more toward computational models and implemented
systems than purely theoretical statements. A presentation of the
theoretical basis of a system can of course be included in the paper,
but should not be its sole focus. Presentations may cover one of the
several themes of the workshop and should be oriented toward:

- description of annotation schemes for verbal and nonverbal signals.
  The presentation should be technically grounded rather than purely
  theoretical.
- description of implemented embodied-agent systems. The presentation
  should focus on the design and implementation process of these
  systems.
- description of evaluation schemes used for embodied-agent systems.


SUBMISSION FORMAT

Attendance at the workshop will be based on acceptance of a paper;
paper submission is required. Papers should not exceed 6 pages (11pt,
single-spaced, 2cm margins on all sides) and should, as far as
possible, be accompanied by an animation describing the work
presented. Every submitted paper will be reviewed by at least two
reviewers from the program committee.

Submissions should be emailed to Catherine Pelachaud
([log in to unmask]) in PS or PDF format by March 16th, 2001. If
email is not possible, please send two copies of your paper to:

Catherine Pelachaud
University of Rome "La Sapienza"
Department of Computer and Systems Science
via Buonarroti, 12
00185 Rome, Italy

Note: Workshop participants will be required to register for the
Autonomous Agents 2001 main conference.


IMPORTANT DATES

March 16 - Deadline for paper submission
April 1  - Notification of Acceptance / Rejection
April 23 - Deadline for camera-ready paper
May   29 - Workshop date


ORGANIZING COMMITTEE

- Catherine Pelachaud, University of Rome "La Sapienza", Italy
- Isabella Poggi, University of Rome Tre, Italy


PROGRAM COMMITTEE

- Norman Badler, University of Pennsylvania, USA
- Berardina de Carolis, University of Bari, Italy
- James Lester, North Carolina State University, USA
- Nadia Magnenat-Thalmann, MiraLab, University of Geneva, Switzerland
- Jan-Torsten Milde, University of Bielefeld, Germany
- Clifford Nass, Stanford University, USA
- Sharon Oviatt, Oregon Graduate Institute, USA
- Catherine Pelachaud, University of Rome "La Sapienza", Italy
- Isabella Poggi, University of Rome Tre, Italy
- Jeff Rickel, USC Information Sciences Institute, USA
- Yasuyuki Sumi, ATR, Japan

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ To receive HCI news, send the message:       ~~
~~ "JOIN BCS-HCI your_firstname your_lastname"  ~~
~~ to [log in to unmask]                 ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ Newsarchives:                                ~~
~~ http://www.jiscmail.ac.uk/lists/bcs-hci.html ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ To join the British HCI Group, contact       ~~
~~ [log in to unmask]                               ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~