Thanks for inviting me to be part of this month's discussion.
Since 2003, I have been looking at how we can use the 'emotions' of
the body to drive moving-image works and experiences that provoke,
expose, and empathize with the audience. It took a few years to
gather the research team I felt was needed in order to become
more empirically informed, so as to monitor, assess, and provoke the
emotional body in more 'intelligent' and 'meaningful' ways. Since
2005 I have been a resident at the Wellcome Department of Neuroimaging
at Queen Square, UCL, currently working with social neuroscientist Chris
Frith, whose work concerns the neural basis of social interaction.
For the past year I have been a visiting artist at the Affective Computing
Group at the MIT Media Lab, mostly working with Ros Picard and Rana El
Kaliouby, whose research looks at designing technologies and models to
assess and communicate affective-cognitive states. I am also a visiting
artist at Brighton and Sussex Medical School, mostly working with the
chair of psychiatry and long-term neuroscientific collaborator Hugo
Critchley, whose work addresses the brain/body mechanisms of emotion,
with a focus on autonomic integration with emotion, as well as
neuroimaging and psychosomatic medicine. I also work with the Human
Computer Interaction Center at UCL, particularly Nadia Berthouze, who
specialises in emotion/HCI. This provides really fertile ground to
explore my work around the emotional/empathic body. Currently my work
looks to the research and limitations in neuroscience and affective
computing to drive new types of art experiences.
By assessing the body in 'intelligent' ways, I mean working
with neuroscientists to assess the data emitted by the body, to
understand how it relates to the feeling state of the participant, and
to use this information to drive more meaningful moving images (not an
easy task; in the past I had taken a more intuitive approach). In
some cases, this means testing the images and the physiological responses
to them in the lab. Also, the interaction design of these
bio-sensing works often means that the viewer has to be constrained, dressed
in obtrusive technology to monitor the body, which affects the emotions
of the body (applying the sensors often causes the body to become more
anxious). Most emotion recognition systems and sensors have been
developed by academia and industry and tend to be used in labs; they are
therefore unaesthetic, bulky and not naturalistic in their
interactivity. They are also problematic to work with in public
exhibitions (they break down after a few days of constant use…).
Conversely, consumer devices are often not robust enough to
monitor the subtlety of physiological responses (though over the
last year they have been getting better). Body movement, the enormous
variability of physiological data sets across multiple participants, and
restraining the natural interaction of the audience all create
difficulties! By working with affective computing scientists, human
computer interaction specialists and neuroscientists, the aim is to
research and develop more naturalistic, robust and transparent
monitoring techniques. We are looking at existing technology and adapting
it, building a more transparent interface between participant and
artwork (http://www.tinagonsalves.com/interactive.html).
As for 'meaningful': artists often use ambiguous, generative,
abstract moving images or sound to respond to the data of the body. I
know pulsating circles or shifting colours representing emotion may
generate a sense of meaning through their ambiguity, but I was more
interested in working with scientists to produce more figurative,
emotionally narrative-based video works that could engage, reflect and
provoke the feelings of the viewer. Most visual emotion research
databases used in emotion neuroscientific studies, such as The
Karolinska, The Ekman and the International Affective Picture System
(IAPS) have been created in labs, informed mostly by scientists, not
artists. From an artistic viewpoint, they are mostly visually
underwhelming and often clichéd. We have built a range of potent visual
stimulus databases to provoke emotional responses, which have been used
in a range of experiments.
With the scientists, we are hoping that creating 'labs' outside of
labs, with stronger visual stimuli, in more naturalistic environments,
will generate more robust and potent data for understanding emotions. Most
of our empirical understandings and mappings of emotion come from
within labs, which makes me wonder whether what we view as 'neutral' is
in fact a baseline of anxiousness. The lab is a difficult environment to
relax in. For our team, a great work is one that becomes a range of
experiments in the lab while also becoming an art experience. When
working in the lab, we need to work in code the scanners can
understand (a limitation), and in time formats that suit the lab and
scanning technology. The exhibition output has its own difficulties:
the work is often built as a progression of prototypes in order to fully
understand the concept, which encompasses the complex balance between
sensing methods, audience experience, architectural space, sound and
image. Most curators are a little apprehensive about using exhibitions as
more of a lab, creating 'testing opportunities' to gather audience
evaluation, space to test the technology, and artist reflection.
Anyway, sorry about the length of this… I will stop here.
tina gonsalves
http://www.tinagonsalves.com