Dear Terry, Ken and David,
For some years now I have been trying to explain AFFECT logics to IT colleagues in an effort to enhance the ability of computer games folks to design games that are more interesting to me. So far I have failed.
Recently my wife, reading on her Kindle in the middle of the night, came across a passage from George Eliot that made me cheer. It well expresses my dislike of chess and computer games alike:
"Fancy what a game of chess would be if all the chessmen had passions and intellects, more or less small and cunning; if you were not only uncertain about your adversary's men, but a little uncertain also about your own . . . You would be especially likely to be beaten if you depended arrogantly on your mathematical imagination, and regarded your passionate pieces with contempt. Yet this imaginary chess is easy compared with a game man has to play against his fellow-men with other fellow-men for instruments."
--from Felix Holt, the Radical
I find chess boring. The greatest pleasure I get is the pleasure some of my opponents get when they defeat me. That is, their affect is not about the game, merely about defeating someone. After a while they work out that I am not trying, and then they get angry, which is also fun. I like it when they take my Queen, which they see as a triumph, and it is, in mathematical terms. In terms of affect logic, however, taking someone's Queen is the silliest move you could make.
Machines are sad because for humans, there is a sadness about all things. While I can well imagine a "sadness of things" algorithm (and I'd be happy to help describe one), I'm not so confident that a meta-algorithm for the "sadness of the sadness of things" could be constructed beyond whimsy. That is, psycho-logic persists in terms of human affects, even in whimsy, but it does not terminate arbitrarily. That is, valorization is immediately available within a Gibsonian kind of affordance (affects are for . . . ). Cognition requires that the affect carries a value (cathexis = hypothesization = valorization) or else cognition comes to a halt. There is no Ctrl+Alt+Delete sequence in a double-take.
cheers as I advance my Queen
keith
>>> Ken Friedman <[log in to unmask]> 19/06/11 2:37 PM >>>
Dear David and Terry,
As Oliver Hardy used to say, "Well, here's another nice mess you've
gotten me into."
Essentially, Terry has made the claim that computers or automated
systems will replace far more than those aspects of design that are
now capable of complete algorithmic description. Terry proposed that
automated systems based on algorithms will be capable of doing what
algorithms cannot now do by exercising judgment and creative capacity,
including the ability to generate ideas and choose between the ethical
or preferred human value of alternative solutions to specific problems.
David therefore asks the right question. If this is so, then why shouldn't
computers and other automated systems replace us entirely? Terry's
question suggests that he follows Ray Kurzweil in pointing us toward what
Kurzweil labels "the singularity," the moment at which computers or artificial
intelligence systems can do anything we can do with our minds.
While I consider this an interesting exercise worth some research, I don't
think that it is a manageable topic for an ordinary PhD thesis. One would
have to answer some massive questions that even the experts have not yet
sorted on the technical side, and on the ethical or social side, it raises
problems that require a much broader scope of expertise than most PhD
students have yet acquired.
Then there is the legal question. Terry has proposed that a designer is
someone who can be held accountable in a court of law for the results of
his or her design. If this is the case, who would be accountable when a
machine gets it wrong?
To my way of thinking, Terry's great virtue lies halfway between Karl Popper
and Alan Turing, offering logically consequential propositions that we can then
criticize and attempt to dismantle. From this, however, follows what I consider
an occasional problem with these kinds of propositions: if one offers these kinds
of ideas from a pure engineering or engineering design perspective, one fails to
account for a range of issues that must also come into play.
I don't have as much hope as Terry does for the future of machines and machine
intelligence. But I'm always interested in his ideas, as there is probably a great
deal more to be won and learned than we know about at present.
On this one, though, I go with David. It's that or risk finding myself in Oliver Hardy's
shoes as I voice my complaints to a mechanized Stan Laurel.
Yours,
Ken
On Sun, 19 Jun 2011 12:01:31 +1000, David Sless wrote:
On 18/06/2011, at 3:15 PM, Terence Love wrote, suggesting a new topic for a phd:
"exploring the potential for the replacement of human creative designers by automated
systems in the next 5-10 years" or something similar.
[David asks] Why stop there? How about:
"exploring the potential for the replacement of human beings by automated systems
in the next 50-100 years" or something similar.