I don't think the robots themselves will be particularly useful (except
maybe as subjects of study). I think what will be far more useful is
the knowledge gained from learning to make the robots. If we can
make robots that are (in some ways) like us, then we'll have a good
model of (some parts of) us. Those models, I would hope, will
eventually inform designers by offering an explanation of why we think
and behave as we do. Designers who know this sort of thing would be
better armed to design for their clients.
Cheers.
Fil
Wolfgang Jonas wrote:
> Dear Terry,
>
> thanks for your quick response this Sunday afternoon (cold and sunny in
> Berlin).
>
> I fully agree with you: there may be robots that appear "human" to an
> observer in the not-so-distant future. And they will probably contribute to
> our understanding (or at least our modelling) of human emotional processes.
>
> My doubts remain as to their benefit for "real world" design processes.
> I see the strange paradox that the better these robots are, the more
> they are like ordinary people. One criterion for perfection (see Turing)
> is that it is impossible to distinguish them from a human being. So what
> is the gain if we have such an artificial participant in a design
> communication?
>
> Maybe my thoughts are too naive... or not radical enough yet...
>
> Best,
>
> Jonas
>
> __________
>
>
> At 20.00 +0800 on 15/01/2006, Terence Love wrote:
>
>> Hi Jonas,
>> Thanks for your message. I understand your concern about simple
>> rationalist models of emotion! There is some evidence of deep change
>> in this area.
>> The relatively recent shift in understanding of the complexity of
>> emotional learning in AI is that sophisticated emotion-based learning
>> responses appear to require and depend on a real physical system that
>> interacts with the real world. This contrasts with earlier attempts
>> to model emotion and feelings 'virtually' and rationally in software,
>> in the same way that e.g. case-based reasoning uses a rules engine
>> processing data.
>> This suggests that the future development of automated design
>> software that includes value judgments and builds on emotions and
>> feeling responses will require some form of physically real robotic
>> user that interacts with this designed world we have. It also
>> suggests that the learning processes will require time, perhaps
>> substantial amounts of time. The approach may, however, offer the
>> possibility of an easier transfer of learning between robot entities,
>> improving on humans' use of gossip, books, theory and lectures.
>>
>> Best wishes,
>> Terry
>> ____________________
>> ===snip
>>
>> I mistrust models of emotion and their outcomes, because - if they
>> are good - they are as complex and as arbitrary and as unpredictable
>> as my own.
>>
>> Designing is proceeding in communication (by means of language for
>> the main part), i.e. in the interaction of these models. Therefore I
>> cannot really see the benefit (yet) of artificial participants in
>> this game (except for the rational part, of course).
--
Filippo A. Salustri, Ph.D., P.Eng.
Department of Mechanical and Industrial Engineering
Ryerson University
350 Victoria St, Toronto, ON, M5B 2K3, Canada
Tel: 416/979-5000 ext 7749
Fax: 416/979-5265
Email: [log in to unmask]
http://deed.ryerson.ca/~fil/