
Subject:

cfp: JMUI Special issue: Real-time affect analysis and interpretation in virtual agents and robots is now PUBLISHED!!!

From:

British HCI News <[log in to unmask]>

Reply-To:

British HCI News <[log in to unmask]>

Date:

Fri, 26 Mar 2010 15:34:12 +0000

Content-Type:

text/plain

Parts/Attachments:

text/plain (178 lines)

~~~~~~~ BRITISH HCI GROUP NEWS SERVICE ~~~~~~~~~~~
~~ http://www.bcs-hci.org.uk/ ~~
~~ All news to: [log in to unmask] ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ NOTE: Please reply to article's originator, ~~
~~ not the News Service ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


JMUI Special issue on real-time affect analysis and interpretation in virtual agents and robots
is now PUBLISHED

*****************************************************************************

JOURNAL ON MULTIMODAL USER INTERFACES

Special issue: Real-time affect analysis and interpretation: closing the affective loop in virtual agents and robots

Guest Editors: Ginevra Castellano, Kostas Karpouzis, Christopher Peters and Jean-Claude Martin
Volume 3, Issues 1-2, pages 1-153, March 2010
http://www.springerlink.com/content/q2080w072713/?p=86740fad4fe94dcab29705938d923c39&pi=0


EDITORIAL

"Special issue on real-time affect analysis and interpretation: closing the affective loop in virtual agents and robots"

Ginevra Castellano, Kostas Karpouzis, Christopher Peters and Jean-Claude Martin
Pages 1-3


ARTICLES


"On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues"

Florian Eyben, Martin Wöllmer, Alex Graves, Björn Schuller, Ellen Douglas-Cowie and Roddy Cowie
Pages 7-19

Abstract
For many applications of emotion recognition, such as virtual agents, the system must select responses while the user is speaking. This requires reliable on-line recognition of the user's affect. However, most emotion recognition systems are based on turn-wise processing. We present a novel approach to on-line emotion recognition from speech using Long Short-Term Memory Recurrent Neural Networks. Emotion is recognised frame-wise in a two-dimensional valence-activation continuum. In contrast to current state-of-the-art approaches, recognition is performed on low-level signal frames, similar to those used for speech recognition. No statistical functionals are applied to low-level feature contours. Framing at a higher level is therefore unnecessary and regression outputs can be produced in real-time for every low-level input frame. We also investigate the benefits of including linguistic features, obtained by a keyword spotter, at the signal frame level.
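As a rough illustration of the frame-wise set-up this abstract describes, here is a minimal sketch assuming PyTorch and an arbitrary 39-dimensional low-level descriptor per frame; it is not the authors' network, feature set or training procedure.

# Hypothetical sketch (not the authors' code): frame-wise regression of
# valence and activation from low-level acoustic frames with an LSTM.
# Feature dimension and hidden size are illustrative assumptions.
import torch
import torch.nn as nn

class FrameWiseAffectRegressor(nn.Module):
    def __init__(self, n_features=39, hidden=64):
        super().__init__()
        # The recurrent layer carries context across frames, so no
        # statistical functionals over whole turns are needed.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # One regression output per frame for each affect dimension.
        self.head = nn.Linear(hidden, 2)  # [valence, activation]

    def forward(self, frames):
        # frames: (batch, time, n_features) low-level descriptors per frame
        out, _ = self.lstm(frames)
        return self.head(out)  # (batch, time, 2): one estimate per frame

# Usage: feed frames as they arrive; here a random 100-frame utterance.
model = FrameWiseAffectRegressor()
utterance = torch.randn(1, 100, 39)
with torch.no_grad():
    va = model(utterance)
print(va.shape)  # torch.Size([1, 100, 2])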



"Student mental state inference from unintentional body gestures using dynamic Bayesian networks"

Abdul Rehman Abbasi, Matthew N. Dailey, Nitin V. Afzulpurkar and Takeaki Uno
Pages 21-31

Abstract
Applications that interact with humans would benefit from knowing the intentions or mental states of their users. However, mental state prediction is not only uncertain but also context dependent. In this paper, we present a dynamic Bayesian network model of the temporal evolution of students' mental states and causal associations between mental states and body gestures in context. Our approach is to convert sensory descriptions of student gestures into semantic descriptions of their mental states in a classroom lecture situation. At model learning time, we use expectation maximization (EM) to estimate model parameters from partly labeled training data, and at run time, we use the junction tree algorithm to infer mental states from body gesture evidence. A maximum a posteriori classifier evaluated with leave-one-out cross validation on labeled data from 11 students obtains a generalization accuracy of 97.4% over cases where the student reported a definite mental state, and 83.2% when we include cases where the student reported no mental state. Experimental results demonstrate the validity of our approach. Future work will explore utilization of the model in real-time intelligent tutoring systems.
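To make the inference step concrete, the following is a deliberately simplified sketch of filtering a hidden mental state from gesture evidence in a chain-structured temporal model. The paper's DBN is richer (context variables, EM-learned parameters, junction-tree inference); the states, gestures and probabilities below are invented for illustration only.

# Illustrative sketch only: one filtering step per observed gesture in a
# simple hidden-state chain. All numbers are assumed, not learned.
import numpy as np

states = ["interested", "bored", "confused"]          # hidden mental states
gestures = ["lean_forward", "head_on_hand", "scratch_head"]

# T[i, j] = P(state_t = j | state_{t-1} = i); E[i, k] = P(gesture k | state i)
T = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.2, 0.1, 0.7]])
E = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.2, 0.7]])
belief = np.full(3, 1 / 3)                            # uniform prior

def update(belief, gesture_idx):
    """One filtering step: predict with T, then weight by the evidence."""
    predicted = T.T @ belief
    posterior = predicted * E[:, gesture_idx]
    return posterior / posterior.sum()

for g in ["head_on_hand", "head_on_hand", "scratch_head"]:
    belief = update(belief, gestures.index(g))
    print(g, "->", dict(zip(states, belief.round(3))))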



"Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis"

Loic Kessous, Ginevra Castellano and George Caridakis
Pages 33-48

Abstract
In this paper a study on multimodal automatic emotion recognition during a speech-based interaction is presented. A database was constructed consisting of people pronouncing a sentence in a scenario where they interacted with an agent using speech. Ten people pronounced a sentence corresponding to a command while making 8 different emotional expressions. Gender was equally represented, with speakers of several different native languages including French, German, Greek and Italian. Facial expression, gesture and acoustic analysis of speech were used to extract features relevant to emotion. For the automatic classification of unimodal data, bimodal data and multimodal data, a system based on a Bayesian classifier was used. After performing an automatic classification of each modality, the different modalities were combined using a multimodal approach. Fusion of the modalities at the feature level (before running the classifier) and at the results level (combining the results from the classifiers for each modality) were compared. Fusing the multimodal data resulted in a large increase in the recognition rates in comparison to the unimodal systems: the multimodal approach increased the recognition rate by more than 10% when compared to the most successful unimodal system. Bimodal emotion recognition based on all combinations of the modalities (i.e., 'face-gesture', 'face-speech' and 'gesture-speech') was also investigated. The results show that the best pairing is 'gesture-speech'. Using all three modalities resulted in a 3.3% classification improvement over the best bimodal results.
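A hedged sketch of the two fusion schemes the abstract compares, using scikit-learn's Gaussian naive Bayes as a stand-in for the Bayesian classifier; the modalities, feature dimensionalities and synthetic data below are assumptions for illustration, not the paper's features or corpus.

# Feature-level vs. decision-level fusion on synthetic data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n, n_classes = 200, 8                                  # 8 emotional expressions
y = rng.integers(0, n_classes, n)
face = rng.normal(size=(n, 10)) + y[:, None] * 0.30    # synthetic modalities
gesture = rng.normal(size=(n, 6)) + y[:, None] * 0.20
speech = rng.normal(size=(n, 12)) + y[:, None] * 0.25
modalities = [face, gesture, speech]

# Feature-level fusion: concatenate modalities, train one classifier.
early = GaussianNB().fit(np.hstack(modalities), y)
early_pred = early.predict(np.hstack(modalities))

# Decision-level fusion: one classifier per modality, multiply posteriors.
clfs = [GaussianNB().fit(X, y) for X in modalities]
late = np.prod([c.predict_proba(X) for c, X in zip(clfs, modalities)], axis=0)
late_pred = late.argmax(axis=1)

# Train-set accuracy only, purely to show the mechanics of each scheme.
print("feature-level fusion acc:", (early_pred == y).mean())
print("decision-level fusion acc:", (late_pred == y).mean())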



"Multimodal user's affective state analysis in naturalistic interaction"

George Caridakis, Kostas Karpouzis, Manolis Wallace, Loic Kessous and Noam Amir
Pages 49-66

Abstract
Affective and human-centered computing have attracted an abundance of attention during the past years, mainly due to the wealth of environments and applications able to exploit and adapt to multimodal input from the users. The combination of facial expressions with prosody information allows us to capture the users' emotional state in an unobtrusive manner, relying on the best performing modality in cases where one modality suffers from noise or bad sensing conditions. In this paper, we describe a multi-cue, dynamic approach to detect emotion in naturalistic video sequences, where input is taken from nearly real world situations, contrary to the controlled recording conditions of audiovisual material. Recognition is performed via a recurrent neural network, whose short-term memory and approximation capabilities cater for modeling dynamic events in facial and prosodic expressivity. This approach also differs from existing work in that it models user expressivity using a dimensional representation, instead of detecting discrete 'universal emotions', which are scarce in everyday human-machine interaction. The algorithm is deployed on an audiovisual database which was recorded simulating human-human discourse and, therefore, contains less extreme expressivity and subtle variations of a number of emotion labels. Results show that in turns lasting more than a few frames, recognition rates rise to 98%.



"From expressive gesture to sound - The development of an embodied mapping trajectory inside a musical interface"

Pieter-Jan Maes, Marc Leman, Micheline Lesaffre, Michiel Demey and Dirk Moelants
Pages 67-78

Abstract
This paper contributes to the development of a multimodal, musical tool that extends the natural action range of the human body to communicate expressiveness into the virtual music domain. The core of this musical tool consists of a low-cost, highly functional computational model developed on the Max/MSP platform that (1) captures real-time movement of the human body into a 3D coordinate system on the basis of the orientation output of any type of inertial sensor system that is OSC-compatible, (2) extracts low-level movement features that specify the amount of contraction/expansion as a measure of how a subject uses the surrounding space, (3) recognizes these movement features as expressive gestures, and (4) creates a mapping trajectory between these expressive gestures and the sound synthesis process of adding harmonically related voices to an originally monophonic voice. The concern for a user-oriented and intuitive mapping strategy was thereby of central importance. This was achieved by conducting an empirical experiment based on theoretical concepts from the embodied music cognition paradigm. Based on empirical evidence, this paper proposes a mapping trajectory that facilitates the interaction between a musician and his instrument, the artistic collaboration between (multimedia) artists and the communication of expressiveness in a social, musical context.
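As an illustration of point (2), here is a minimal sketch of a contraction/expansion measure, assuming a single frame of 3-D body positions; the paper derives positions from OSC-compatible inertial sensors inside Max/MSP, so this Python fragment is only a conceptual analogue.

# Illustrative contraction/expansion index: mean distance of tracked points
# from their centroid. Larger values mean the body occupies more of the
# surrounding space. The point counts and scales below are invented.
import numpy as np

def contraction_index(points):
    """points: (n_joints, 3) array of 3-D body positions for one frame."""
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1).mean()

compact = np.random.rand(15, 3) * 0.3      # limbs held close to the body
expanded = np.random.rand(15, 3) * 1.5     # limbs stretched into space
print(contraction_index(compact), "<", contraction_index(expanded))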



"The mental ingredients of bitterness"

Isabella Poggi and Francesca D'Errico
Pages 79-86

Abstract
In view of multimodal interfaces capable of a detailed representation of the user's possible emotions, the paper analyses bitterness in terms of its mental ingredients, the beliefs and goals represented in the mind of a person when feeling an emotion. Bitterness is a negative emotion in between anger and sadness: like anger, it is caused by a sense of injustice, but it also entails a sense of impotence which makes it similar to sadness. Often caused by betrayal, it comes from the disappointment of an expectation about oneself or another person with whom one is affectively involved, or from a disproportion between commitment and actual results. The ingredients found in a pilot study were tested through qualitative analysis of a further questionnaire, which confirmed the hypothesized ingredients and further revealed the different nature of bitterness across ages and across types of work.



"Affect recognition for interactive companions: Challenges and design in real world scenarios"

Ginevra Castellano, Iolanda Leite, André Pereira, Carlos Martinho, Ana Paiva and Peter W. McOwan
Pages 89-98

Abstract
Affect sensitivity is an important requirement for artificial companions to be capable of engaging in social interaction with human users. This paper provides a general overview of some of the issues arising from the design of an affect recognition framework for artificial companions. Limitations and challenges are discussed with respect to other capabilities of companions, and a real world scenario where an iCat robot plays chess with children is presented. In this scenario, the affective states that a robot companion should be able to recognise are identified, and the non-verbal behaviours that are affected by the occurrence of these states in the children are investigated. The experimental results aim to provide the foundation for the design of an affect recognition system for a game companion: in this interaction scenario, children tend to look at the iCat and smile more when they experience a positive feeling and when they are engaged with the iCat.



"When my robot smiles at me - Enabling human-robot rapport via real-time head gesture mimicry"

Laurel D. Riek, Philip C. Paul and Peter Robinson
Pages 99-108

Abstract
People use imitation to encourage each other during conversation. We conducted an experiment to investigate how imitation by a robot affects people's perceptions of their conversation with it. The robot operated in one of three ways: full head gesture mimicking, partial head gesture mimicking (nodding), and non-mimicking (blinking). Participants rated how satisfied they were with the interaction. We hypothesized that participants in the full head gesture condition would rate their interaction the most positively, followed by the partial and non-mimicking conditions. We also performed gesture analysis to see if any differences existed between groups, and found that men made significantly more gestures than women while interacting with the robot. Finally, we interviewed participants to gain additional insight into their feelings of rapport with the robot; these interviews revealed a number of valuable insights.



"Communication of musical expression by means of mobile robot gestures"

Birgitta Burger and Roberto Bresin
Pages 109-118

Abstract
We developed a robotic system that can behave in an emotional way. A simple 3-wheeled robot with limited degrees of freedom was designed. Our goal was to make the robot display emotions in music performance by performing expressive movements. These movements were compiled and programmed based on the literature on emotion in music, musicians' movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness were implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, even in constrained circumstances.
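A purely illustrative sketch of the kind of emotion-to-movement lookup the abstract implies for the three implemented emotions; the parameter names and values are assumptions, not the authors' programmed movements.

# Hypothetical mapping from emotion labels to movement parameters that a
# simple wheeled robot could use. All values are invented for illustration.
EXPRESSIVE_MOVES = {
    "happiness": {"speed": 0.8, "turn_amplitude": 0.7, "smoothness": 0.9},
    "anger":     {"speed": 1.0, "turn_amplitude": 0.9, "smoothness": 0.2},
    "sadness":   {"speed": 0.2, "turn_amplitude": 0.3, "smoothness": 0.8},
}

def plan_gesture(emotion):
    """Describe the expressive movement the robot would perform."""
    p = EXPRESSIVE_MOVES[emotion]
    return (f"drive at {p['speed']:.1f} of max speed, "
            f"turn amplitude {p['turn_amplitude']:.1f}, "
            f"smoothness {p['smoothness']:.1f}")

for e in EXPRESSIVE_MOVES:
    print(e, "->", plan_gesture(e))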



"Investigating shared attention with a virtual agent using a gaze-based interface"

Christopher Peters, Stylianos Asteriadis and Kostas Karpouzis
Pages 119-130

Abstract
This paper investigates the use of a gaze-based interface for testing simple shared attention behaviours during an interaction scenario with a virtual agent. The interface is non-intrusive, operating in real-time with a standard web-camera for input, monitoring users' head directions and resolving them to screen coordinates. We use the interface to investigate user perception of the agent's behaviour during a shared attention scenario. Our aim is to elaborate on important factors to be considered when constructing engagement models that must account not only for behaviour in isolation, but also for the context of the interaction, as is the case during shared attention situations.
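For readers unfamiliar with this kind of interface, here is a rough sketch of resolving head direction to screen coordinates; the pinhole-style mapping from yaw/pitch angles and the field-of-view figures are assumptions for illustration, not the authors' method.

# Map head yaw/pitch (degrees, 0 = facing the screen centre) to pixels,
# assuming the tracked range of head rotation spans the whole screen.
def head_to_screen(yaw_deg, pitch_deg, screen_w=1920, screen_h=1080,
                   range_x_deg=40.0, range_y_deg=25.0):
    x = screen_w / 2 + (yaw_deg / (range_x_deg / 2)) * (screen_w / 2)
    y = screen_h / 2 - (pitch_deg / (range_y_deg / 2)) * (screen_h / 2)
    # Clamp so extreme head turns stay on the screen border.
    return (min(max(x, 0), screen_w), min(max(y, 0), screen_h))

print(head_to_screen(0, 0))      # centre of the screen
print(head_to_screen(10, -5))    # looking right and slightly down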



"HMM modeling of user engagement in advice-giving dialogues"

Nicole Novielli
Pages 131-140

Abstract
This research aims at defining a real-time probabilistic model of user engagement in advice-giving dialogues. We propose an approach based on Hidden Markov Models (HMMs) to describe the differences in the dialogue pattern due to the different levels of engagement experienced by users. We train our HMM models on a corpus of natural dialogues with an Embodied Conversational Agent (ECA) in the domain of healthy eating. The dialogues are coded in terms of Dialogue Acts associated with each system or user move. Results are quite encouraging: HMMs are a powerful formalism for describing the differences in dialogue patterns due to the users' different levels of engagement, and they can be successfully employed in real-time detection of user engagement. However, the HMM learning process shows a lack of robustness when using low-dimensional and skewed corpora. Therefore we plan a further validation of our approach with larger corpora in the near future.
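A minimal sketch of the classification idea: one HMM per engagement level over dialogue-act sequences, with a new dialogue labelled by the higher-likelihood model. The two tiny hand-set HMMs and the dialogue-act inventory below are invented; the paper learns its models from a coded ECA corpus.

# Forward algorithm over discrete-observation HMMs (log-space omitted for
# brevity; adequate for short sequences). All parameters are assumed.
import numpy as np

def log_likelihood(obs, pi, A, B):
    """pi: initial state probs, A: transitions, B: emission probs."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

acts = {"ask_info": 0, "follow_up": 1, "off_topic": 2}
engaged = (np.array([0.6, 0.4]),
           np.array([[0.7, 0.3], [0.4, 0.6]]),
           np.array([[0.6, 0.35, 0.05], [0.3, 0.6, 0.1]]))
disengaged = (np.array([0.5, 0.5]),
              np.array([[0.6, 0.4], [0.5, 0.5]]),
              np.array([[0.2, 0.2, 0.6], [0.3, 0.1, 0.6]]))

dialogue = [acts[a] for a in ["ask_info", "follow_up", "follow_up"]]
scores = {"engaged": log_likelihood(dialogue, *engaged),
          "disengaged": log_likelihood(dialogue, *disengaged)}
print(max(scores, key=scores.get), scores)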



"Natural interaction with a virtual guide in a virtual environment - A multimodal dialogue system"

Dennis Hofs, Mariët Theune and Rieks op den Akker
Pages 141-153

Abstract
This paper describes the Virtual Guide, a multimodal dialogue system represented by an embodied conversational agent that can help users to find their way in a virtual environment, while adapting its affective linguistic style to that of the user. We discuss the modular architecture of the system, and describe the entire loop from multimodal input analysis to multimodal output generation. We also describe how the Virtual Guide detects the level of politeness of the user's utterances in real-time during the dialogue and aligns its own language to that of the user, using different politeness strategies. Finally we report on our first user tests, and discuss some potential extensions to improve the system.
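A hedged sketch of the alignment idea described here, assuming a trivial keyword-based politeness detector and three invented response strategies; the Virtual Guide's actual politeness classifier and tactics are described in the paper.

# Toy politeness alignment: score the user's utterance, then choose a reply
# strategy at a matching level. Marker lists and replies are assumptions.
POLITE_MARKERS = {"please", "could", "would", "thank", "thanks", "sorry"}
BLUNT_MARKERS = {"now", "just", "hurry"}

def politeness_score(utterance):
    words = set(utterance.lower().replace("?", "").replace("!", "").split())
    return len(words & POLITE_MARKERS) - len(words & BLUNT_MARKERS)

def aligned_reply(utterance, destination="the lounge"):
    score = politeness_score(utterance)
    if score > 0:   # polite user: indirect, polite strategy
        return f"Certainly, I would be happy to show you the way to {destination}."
    if score < 0:   # blunt user: short, direct strategy aligns better
        return f"Straight ahead, then left: {destination}."
    return f"Follow me, I will take you to {destination}."

print(aligned_reply("Could you please tell me where the lounge is?"))
print(aligned_reply("Just tell me where the lounge is now!"))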




~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ To receive HCI news, send the message: ~~
~~ "JOIN BCS-HCI your_firstname your_lastname" ~~
~~ to [log in to unmask] ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ Newsarchives: ~~
~~ http://www.jiscmail.ac.uk/lists/bcs-hci.html ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ To join the British HCI Group, contact ~~
~~ [log in to unmask] ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

