RE: Automata and redefinition of design practice

dear wolfgang,

i know you didn't call for models of emotions and feelings, terry did.
since i addressed my reply to both of you, you had the option to pick what
applies.  sorry on my part for not discriminating between the two of you.

what you consider to be a statement about the future is already present.
many users attribute friendliness or hostility to computers, and a liking
or disliking of their users (see the book The Media Equation).  but this
hasn't created a model of emotions, nor has it dispelled the widely held
belief that the emotions we attribute to people (and things) are only
inside the mind or body of the attributee.  without getting into an
elaborate argument here, a good deal of what we call emotions are the
product of situation-specific (verbal) attribution and of the compliance
with, or learning of, such attributions on the part of those whose
emotions are attributed.  hence emotions are socially distinguished and
individually complied with -- as evidenced by significant cross-cultural
differences in both their distinctions and the situational appropriateness
of having them.

klaus
  -----Original Message-----
  From: PhD-Design - This list is for discussion of PhD studies and
  related research in Design [mailto:[log in to unmask]] On Behalf Of
  Wolfgang Jonas
  Sent: Monday, January 16, 2006 3:02 AM
  To: [log in to unmask]
  Subject: Re: Automata and redefinition of design practice


  Dear Klaus,

  just to clarify:

  I don't set my hopes on robots that could learn emotions.

  What I said was:

    there may be robots that appear "human" to an
    observer in the not so far future. And they will probably contribute
    to our understanding (or at least our modelling) of human emotional
    processes.

  This is something else.

  Best,

  Jonas

  ___________

  At 13:43 -0500 on 15/01/2006, Klaus Krippendorff wrote:
    dear terry and jonas,

    don't set your hopes on robots that could learn emotions.

    emotions have something to do with how cognition is embodied, the
    unnoticed substrate of processes.  since robots are designed and of a
    material different from humans, they may become jammed, slow, wear
    out or break down, but i wouldn't call that emotions.

    also the distinguishing of emotions is learned in language.  this
    accounts for the fact that different cultures recognize different
    emotions, exhibit different emotional responses (which includes
    having different aesthetics -- why is swiss concrete art different
    from indian iconography and chinese paintings?).

    klaus

    -----Original Message-----
    From: PhD-Design - This list is for discussion of PhD studies and
    related research in Design [mailto:[log in to unmask]] On Behalf
    Of Wolfgang Jonas
    Sent: Sunday, January 15, 2006 7:31 AM
    To: [log in to unmask]
    Subject: Re: Automata and redefinition of design practice


    Dear Terry,

    thanks for your quick response this Sunday afternoon (cold and sunny
    in Berlin).

    I fully agree with you: there may be robots that appear "human" to an
    observer in the not so far future. And they will probably contribute
    to our understanding (or at least our modelling) of human emotional
    processes.

    My doubts remain as to their benefit for "real world" design
    processes. I see the strange paradox that the better these robots
    are, the more they are like ordinary people. One criterion for
    perfection (see Turing) is that it is impossible to distinguish them
    from a human being. So what is the gain if we have such an artificial
    participant in a design communication?

    Maybe my thoughts are too naive... or not radical enough yet...

    Best,

    Jonas

    __________


    At 20:00 +0800 on 15/01/2006, Terence Love wrote:
    >Hi Jonas,
    >Thanks for your message. I understand your concern about simple
    >rationalist models of emotion! There is some evidence of deep change
    >in this area.
    >The relatively recent shift in understanding of the complexity of
    >emotional learning in AI is that sophisticated emotion-based learning
    >responses appear to require and depend on a real physical system that
    >interacts with the real world. This contrasts with earlier attempts
    >to model emotion and feelings 'virtually' and rationally in software,
    >in the same way that e.g. case-based reasoning uses a rules engine
    >processing data.
    >This suggests that the future development of automated design
    >software that includes value judgments and builds on emotions and
    >feeling responses will require some form of physically real robotic
    >user that interacts with this designed world we have. It also
    >suggests that it will require time, perhaps substantial amounts of
    >time, for the learning processes. The approach may, however, offer
    >the possibility of an easier transfer of learning between robot
    >entities that will improve on humans' use of gossip, books, theory
    >and lectures.
    >
    >Best wishes,
    >Terry
    >____________________
    >===snip
    >
    >I mistrust models of emotion and their outcomes, because - if they
    >are good - they are as complex and as arbitrary and as unpredictable
    >as my own.
    >
    >Designing proceeds in communication (for the most part by means of
    >language), i.e. in the interaction of these models. Therefore I
    >cannot really see the benefit (yet) of artificial participants in
    >this game (except for the rational part, of course).