Dear Terry and Jonas,
Don't set your hopes on robots that could learn emotions.
Emotions have something to do with how cognition is embodied, with the
unnoticed substrate of its processes. Since robots are designed and made of a
material different from humans, they may become jammed, slow down, wear out,
or break down, but I wouldn't call that emotions.
Also, the distinguishing of emotions is learned in language. This accounts
for the fact that different cultures recognize different emotions and exhibit
different emotional responses (which includes having different aesthetics --
why is Swiss concrete art different from Indian iconography and Chinese
paintings?).
Klaus
-----Original Message-----
From: PhD-Design - This list is for discussion of PhD studies and
related research in Design [mailto:[log in to unmask]] On Behalf
Of Wolfgang Jonas
Sent: Sunday, January 15, 2006 7:31 AM
To: [log in to unmask]
Subject: Re: Automata and redefinition of design practice
Dear Terry,
thanks for your quick response this Sunday afternoon (cold and sunny
in Berlin).
I fully agree with you: there may be robots that appear "human" to an
observer in the not-too-distant future. And they will probably contribute
to our understanding (or at least our modelling) of human emotional
processes.
My doubts remain as to their benefit for "real world" design
processes. I see the strange paradox that the better these robots
are, the more they are like ordinary people. One criterion for
perfection (see Turing) is that it is impossible to distinguish them
from a human being. So what is the gain if we have such an artificial
participant in a design communication?
Maybe my thoughts are too naive... or not radical enough yet...
Best,
Jonas
__________
At 20:00 +0800 on 15/01/2006, Terence Love wrote:
>Hi Jonas,
>Thanks for your message. I understand your concern about simple rationalist
>models of emotion! There is some evidence that there is deep change in this
>area.
>The relatively recent shift in understanding of the complexity of emotional
>learning in AI is that sophisticated emotion-based learning responses appear
>to require and depend on a real physical system that interacts with the real
>world. This contrasts with earlier attempts to model emotion and feelings
>'virtually' and rationally in software, in the same way that e.g. case-based
>reasoning uses a rules engine processing data.
>This suggests that the future development of automated design software that
>includes value judgments and builds on emotions and feeling responses will
>require some form of physically real robotic user that interacts with this
>designed world we have. It also suggests that it will require time, perhaps
>substantial amounts of time, for the learning processes. The approach may
>however offer the possibility of an easier transfer of learning between
>robot entities that will improve on humans' use of gossip, books, theory and
>lectures.
>
>Best wishes,
>Terry
>____________________
>===snip
>
>I mistrust models of emotion and their outcomes, because - if they
>are good - they are as complex and as arbitrary and as unpredictable
>as my own.
>
>Designing proceeds in communication (by means of language for the most
>part), i.e. in the interaction of these models. Therefore I
>cannot really see the benefit (yet) of artificial participants in
>this game (except for the rational part, of course).