CYBER-SOCIETY-LIVE Archives
CYBER-SOCIETY-LIVE@JISCMAIL.AC.UK
Subject: [CSL]: NetFuture #159
From: J Armitage <[log in to unmask]>
Reply-To: Interdisciplinary academic study of Cyber Society <[log in to unmask]>
Date: Wed, 8 Dec 2004 08:07:44 -0000
Content-Type: text/plain
Parts/Attachments: text/plain (528 lines)

From: Steve Talbott [mailto:[log in to unmask]]
Sent: 07 December 2004 20:08
To: [log in to unmask]
Subject: NetFuture #159

                                 NETFUTURE

                    Technology and Human Responsibility

 =========================================================================
Issue #159 A Publication of The Nature Institute December 7, 2004
 =========================================================================
             Editor: Stephen L. Talbott ([log in to unmask])

                  On the Web: http://www.netfuture.org/
     You may redistribute this newsletter for noncommercial purposes.

Can we take responsibility for technology, or must we sleepwalk
in submission to its inevitabilities? NetFuture is a voice for
responsibility. It depends on the generosity of those who support its
goals. To make a contribution: http://www.netfuture.org/support.html.


CONTENTS:
---------

Editor's Note

Quotes and Provocations
   Pianists and Video Game Players

Invisible Tools, Emotionally Supportive Pals, Or ... ? (Steve Talbott)
   On putting machines into their place

DEPARTMENTS

About this newsletter

 =========================================================================

                              EDITOR'S NOTE

A reminder that The Nature Institute's website (new and improved version)
is now up and is being periodically updated: http://natureinstitute.org.
All back issues of NetFuture are also accessible from the site.

SLT

 =========================================================================

                         QUOTES AND PROVOCATIONS


Pianists and Video Game Players
-------------------------------

Raj Reddy, a Carnegie Mellon professor, is working on a low-cost,
wireless, all-in-one PC/television/DVD player/videophone for users in
developing countries. One person who seems to like the idea is University
of California (Berkeley) administrator, Tom Kalil. "Entertainment is the
killer app[lication]", he says, "and that will smuggle something that is a
lot more sophisticated into the home".

You might well hesitate over the logic of this sentiment. If the more
worthwhile and sophisticated things have so little apparent value that
they must be smuggled into people's lives under the cover of other stuff,
and if this other stuff is what possesses the killer appeal ... well, we
at least ought to wonder whether this is rather like handing someone a
plateful of chocolates with a stalk of broccoli on the side and then
saying, "Here, eat what you'd like. (Ha ha. We sure pulled a fast one on
him, didn't we?)"

Smuggling the good under cover of the questionable or, in some cases, the
downright despicable, has become one of the cliches of high-tech culture.
It is an extremely useful cliche, since there is scarcely any human
activity in which you cannot find (or invent) some redeeming value. Just
the other day I saw a news item about a video game called "JFK: Reloaded",
which allows players to recreate and participate in the assassination of
President John F. Kennedy. Faced with criticism, the company producing
the game called it an educational "docugame" that would "stimulate a
younger generation of players to take an interest in this fascinating
episode of American history". Similarly, I suppose you could describe the
older game where drivers score points by running over pedestrians as a
social policy game stimulating interest in issues of public safety.

But my concern at the moment is with what may be the single most common
illustration of the cliche. However disgusting the video game, we almost
always hear that "at least it improves hand-eye coordination". I have no
idea where this culture-wide fixation on hand-eye coordination comes from.
It is, at the very least, odd, given that any healthy childhood -- indeed,
almost anything a child might naturally want to do (before his instincts
have been deadened by technology) -- will lead toward proper hand-eye
coordination. And, regarding the child glued to a video screen, why
aren't we also concerned about leg coordination? Or about whole-body
coordination?

But another issue is, for me, the decisive one. Physically coordinated
performance becomes admirable in the fullest sense only to the degree it
is caught up within a higher expressive purpose. Without this purpose, we
have only a descent toward the automatic and reflexive -- in other words,
toward the machine-like. By means of a higher aim, on the other hand
(think of the physical dexterity and artistic achievement of the gymnast,
dancer, and instrumentalist) the physical skills are ennobled. They rise
from the merely effective to the beautiful.

People often suggest that the manual skills gained from video games are
not unlike those required by the piano player. The comparison can be
revealing. Certainly muscular training and coordination are essential to
the pianist. Even something rather like automatic and reflexive behavior
is required. It would be impossible to play if the artist had to direct
the movement of each finger consciously.

However, this, too, is a rather shallow cliche. The pianist does in fact
direct each movement consciously. While a kind of lower-level, muscular
memory is very much at work, every motion of the fingers adapts, however
subtly, to the artistic intention of the moment. This intention may be
quite different from what it was in the last performance, depending on the
setting, the audience, the performer's mood, and so on. So the level we
like to think of as automatic is continually being disciplined and shaped
from a higher, artistic level. This power of shaping constitutes the real
mastery of the pianist.

The difference between the piano and the shoot-em-up video game is that,
for the most part, the latter trains our reflexes to operate independently
of our higher, more artistic sensibilities. The aim is merely to maximize
a score or otherwise to win. Where the pianist is pursuing a sense of a
coherent whole and is trying to produce an esthetically unified
performance, the video game player is simply responding to one damned
thing after another. Bodily grace and expressive content hardly figure
into the picture as conscious goals -- although I suspect there are few
imaginable activities where the truly superb performer is not required to
develop *some* aspects of grace.

But perhaps it is enough to ask yourself: going into an open-heart
operation, would you rather hear that your surgeon is a champion video
game player or an accomplished pianist? Well, no, I take that back.
Given the mechanistic images that increasingly influence our understanding
of every aspect of the world, including medicine, I can almost guarantee
you that many people would think the video game player likely to be more
competent -- this despite the fact that, as we saw in NF #157, the heart
is itself a musical instrument.

All this, by the way, bears on a science born of technology. Looking at a
world whose nature is as far removed from mechanism as it could possibly
be -- a world of streams and trees and clouds -- it seems we can do
nothing better than imagine infinitesimal mechanisms behind the scenes
while we ignore the higher, expressive gesturing that gives rise to,
disciplines, and masters whatever else is going on. We can, of course,
say that in our search for mechanisms we are "being rigorous and
quantitative". And it's true that a concert-goer adopting such a stance
might become wonderfully precise in measuring the pianist's intervals,
pitches, tempos, and dynamic changes. But he would miss out badly if he
mistook this disjointed data for the music.

That the world is full of music no one would deny -- no one, that is, who
is not busy philosophizing. Watch a sunset, sit beside a stream, or
wander through a field and something in you will acknowledge the music
whether you wish it or not. The mechanistic stance in science grew, not
from an original conviction that nature is not an artist, but rather from
the choice to attend to other things. The measurable parameters of
nature's performance became the sole concern. It is not surprising that,
after a few centuries of this single-minded choice, the philosophical
conviction emerged that the music is some sort of human illusion or
invention -- that nature is less an artist than a video game engineer, and
that everything going on amounts to little more than one damned thing
after another, without esthetic unity, without feeling, without meaningful
expression.

This conviction, however, is the true illusion, and I doubt whether those
raised from childhood on video games -- however wondrous their hand-eye
coordination -- have anywhere near as good a chance of escaping the
illusion as artists do.

Related articles:
-----------------

"The Heart's Song" in NF #157:

   http://netfuture.org/2004/Oct2104_157.html

"The Reality of Appearances" in NF #119:

   http://netfuture.org/2001/Mar2701_119.html

SLT

 =========================================================================

          INVISIBLE TOOLS, EMOTIONALLY SUPPORTIVE PALS, OR ... ?

                              Steve Talbott
                           ([log in to unmask])

Try juxtaposing these two thoughts:

** Researchers are telling us that, emotionally and intellectually, we
   respond more and more to digital machines as if they were people. Such
   machines, they say, ought to be designed so as to be emotionally
   supportive. ("Good morning, John. You seem a little down today.
   Bummer.") This is thought to be quite reasonable, since in any case
   machines are becoming ever more human-like in their capabilities.

** The common advice from other human-computer interface experts is that
   we should design computers and applications so that they become
   invisible -- transparent to the various aims and activities *we* are
   pursuing. They shouldn't get in our way. For example, if we are
   writing, we should be able to concentrate on our writing without having
   to divert attention to the separate requirements of the word processor.

The conjunction here is slightly odd. Treat machines like people, but
make them invisible if possible? Combining the two ideals wouldn't say
much for our view of people. It sounds as though we're traveling down two
rather different tracks. And, in the context of current thinking about
computers, neither of them looks particularly healthy to me. But perhaps
they can help us to explore the territory, leading us eventually to a
richer and more satisfactory assessment of the human-machine relationship.


We Need to Recognize Our Own Assumptions
----------------------------------------

Surely there is something right about Ben Shneiderman's advice when, in a
promotional interview for his book, *Leonardo's Laptop*, he states:

   Everyone needs to be alert to the harmful aspects of technology, so
   that designers produce truly elegant products that facilitate rather
   than disrupt .... Effective technologies often become "invisible"....

Who would prefer disruption to invisibility? But then, invisibility
itself is also problematic. As information technologies become ever more
sophisticated reflections of our own intelligence, it seems fair to say
that our thoughts and assumptions get built into them in an increasingly
powerful way. Their whole purpose, after all, is to embody our own
contrivings. So we meet ourselves in our machines -- which is already
reason enough for caution! Do we really want all those strivings and
contrivings -- all those thoughts and assumptions someone has cleverly
etched into the hardware and software we are using -- to remain invisible?
When employing a search engine to sift through news items, should we be
content to remain ignorant about the criteria, commercial or otherwise,
determining the engine's presentation of hits?

A vital necessity for all of us today is to remain conscious of the
assumptions and unseen factors driving our thoughts and activity. To give
up on this is to give up on ourselves and to hand society over to unruly
hidden drives. But if we must remain conscious of our own assumptions, it
can hardly be less important to prevent others from surreptitiously
planting their assumptions in us. Granting (simplistically for the
moment) that we are in some sort of conversation with intelligent
machines, it seems only natural that we would want to keep in view our
conversational partner's contribution to the dialogue. The alternative
would be for the machine to influence or control us beneath the threshold
of awareness.

Keeping the other person (or thing) in view disallows invisibility as a
general ideal. In human exchange we certainly *hope* the other person's
presence will not prove downright disruptive. But in any worthwhile
friendship neither do we want the friend simply to disappear. And we can
be sure that, at one point or another, the requirements of friendship
*will* move us disturbingly out of our path. I cannot enjoy the meanings
a friend brings into my life without risking the likelihood that some of
these meanings will collide with my own. If computers are like people, I
can hardly expect, or even want, to escape the unsettling demands they
will impose upon me to rise above myself. Thankfully, true friends can on
occasion be disrupters of the worst sort.


Complementary Errors
--------------------

But *are* computers like people? I have already suggested that they
embody many of our assumptions, and here I have now been drawing an
analogy between human-computer interactions and person-to-person
friendships. Does this mean I buy into the first view stated at the
outset -- the view that it is natural for us to respond emotionally and
intellectually to intelligent machines as if they were persons?

Not at all. If we cannot accept the ideal of machine-invisibility in any
absolute sense, neither can we accept the ideal of machine-personality.
The problem with both ideals, at root, comes from the same source: a
failure to reckon adequately with the computer as a human expression. The
two ideals simply err from opposite and complementary sides: a striving
for invisibility encourages dangerous neglect of the tendentious
expressive content we have vested in the machine; on the other hand,
trying to make the machine itself into a person mistakes the
machine-as-an-expression for the humans who have done the expressing.

I am convinced that rising above these complementary errors would
strikingly transform the discipline of artificial intelligence, not to
mention the entire character of a machine-based society.

I have commented before (NF #148) that the world is full of human
expressions that are, in part, manifestations of intelligence. The
intelligence is really there, objectively, in our artifacts -- in the
sound waves uttered from our larynxes, in the pages of text we write, in
the structure and operation of a loom, automobile, or computer. It is
impossible to doubt the objectivity, given that anyone who attends to
these artifacts can to one degree or another decipher the intelligence
that was first spoken into them. We do this all the time when we read a
book. That's just the nature of the world through and through: it is
receptive to, and a bearer of, intelligence.

But (as I have also pointed out), it is nonsense to mistake the artifact
for the artificer, or the intelligence spoken into the world as product
for the speaking as power. The endemic preoccupation with the question
whether computers are capable of human-like intelligence is one
manifestation of this nonsense. But if we are willing to step back from
this preoccupation and look at the computer in its full human context,
then we can gain a much more profound appreciation of its intelligence.
At the same time, such a contextual approach can guide us toward a more
balanced view of the human-machine relationship.


The Computer in Context
-----------------------

When, instead of trying vainly to coax signs of life from the computer as
a detached and self-subsistent piece of machinery, we examine it as an
expression of living beings, then immediately our flat, two-dimensional
picture of it becomes vibrant and vital. We see analysts reconsidering
almost every human activity, asking what is essential about it and
imagining how it might be assisted or even transformed by the elaborate
structuring potential of digital devices. We see designers and engineers
applying their ingenuity to achieve the most adequate implementation of
the newly conceived tools. And we see consumers and employees struggling
to use or not use the devices they are handed, weighing how to adapt them
to their own needs, perhaps even sabotaging them in service of higher
ends.

All this is, or at least can be, creative activity of the highest sort.
But preserving the creative element depends precisely on our *not* viewing
the computer as a merely given and independent reality. For the irony is
that only when viewed as making an independent contribution does it become
an absolutely dead weight, and therefore a wholly negative factor in human
society. Removed from the context of continual design and redesign, use
and re-imagined use, sabotage and re-invention, it presents us with
nothing but a mechanically fixed and therefore limiting syntax. To
celebrate the machine in its own right is like celebrating the alphabet,
or the ink on the page, or the grammatical structure of a great literary
text, rather than the human expression they are all caught up in.

It may seem odd to cite the computer's "fixed and limiting syntax", given
the complex and infinitely refined elaboration of logic constituting this
syntax. But that's just the problem. We find in every domain of life that
an elaborate and successful structuring of the conditions of life is not
only the glorious achievement of past effort, but also the chief obstacle
to future effort. All life is a continual development, a maturing, an
evolution, an overcoming or transformation of what is given from the past.

Owen Barfield is referring to this problem in connection with the renewal
of the expressive power of language when he writes,

   Thus, it so often comes about that the fame of great poets is
   posthumous only. They have, as Shelley said, to create the taste by
   which they are appreciated; and by the time they have done so, the
   choice of words, the new meaning and manner of speech which they have
   brought in *must*, by the nature of things, be itself growing heavier
   and heavier, hanging like a millstone of authority round the neck of
   free expression. We have but to substitute dogma for literature, and
   we find the same endless antagonism between prophet and priest. How
   shall the hard rind not hate and detest the unembodied life that is
   cracking it from within? How shall the mother not feel pain? (*Poetic
   Diction*, chapter 10)

And how shall the corporate reformer not despise the stewards of legacy
software! *This* problem only becomes greater as the inexorable drive
toward interlocking global standards gains momentum.

The attempt to find a principle of life within the computer as such,
detached from its human context, is damaging precisely because the machine
itself is almost *nothing but* the hard rind in need of cracking. The
continual process of living renewal must come from us, and from our
commitment, as designers and users, to transform the rigid syntax we have
received from the "dead hand of the past". We rightly strive for flexible
software, but there remains a crucial sense in which every piece of
software, once achieved, becomes a dead weight.


A Mechanical or Human System?
-----------------------------

I realize that many will at this point want to press the case for software
that learns or adapts or evolves. That this "learning" is one of the most
strained metaphors of all time is something that must be left for another
article. At the moment I can only barely mention the essential limitation
of the fixed syntax that governs whatever sort of learning is claimed to
be going on. There is, in every digital machine, a top, or outer, level
of syntax defining and limiting what this particular machine is. When
that level changes, we don't have learning or evolving; we have a
malfunction.

This is quite unlike a living process, whose organic nature subverts the
very idea of a top level of design in the mechanistic sense. Try
identifying discrete levels of design in any organism, and you will
quickly see the futility of it. All is mutual participation and
interpenetration. The transformation from within experienced by every
organism is transformation that leaves no level of physical structure or
process wholly exempt. A computer does not bear its own principle of life
within itself in this sense.

I am not saying that the syntactic flexibility we achieve in our computer
programs is unimportant. This achievement is an essential part of our
striving to express something living. But the striving and expressing are
*our* striving and expressing, and the living result is found in the way
we live with and employ the things we have made. It is always destructive
to become fixated upon the capabilities of the machine, as if they
themselves could be an ultimate source of personal or social renewal.

The mechanical flexibility we typically aim for is in fact a flexibility
in human-machine interaction. Remove the machine from this interactive
context, and its inflexibility immediately declares itself. We can
program all the options and alternative pathways we want into a machine,
but if it is sitting unattended in some warehouse, endlessly spewing out
the prompt, "Please select one of the following options", until its power
runs down, the flexibility we programmed into it will not be very much in
evidence. What we mostly aim for when we program computers for
flexibility is to give them a syntax that allows *us* to be as flexible as
possible.

Of course, this remark immediately gets the engineer thinking about how
the computer might be redesigned. It could, for example, respond to
decreasing power by moving around to find an electrical outlet it can plug
into, and it might scan the radio spectrum for network connections -- and
so on without end. This "without end", in fact, is what so impresses the
engineer who is thinking about the human-like potentials of the computer.

It always amazes me to see how difficult it is for such people to
recognize their own ongoing contribution to the computer's endless
development, and to distinguish this properly from the completed results
of their effort at any particular moment. For all the sophistication of
the systems analysis going into mechanical systems, we seem unable to
mount any reasonable analysis of the human-machine system, except by
reducing the human being to a mechanical element of the system.

A much more fruitful approach would be to consider the machine within its
human context. In this way we would elevate the machine, not through the
crazy imputation of emotions and thoughts to it, but rather through the
recognition that our conversation with the machine is, in the end, a
conversation with ourselves -- just as we converse with ourselves (and not
in any primary sense with paper and ink) when we read a text.

If we would truly raise the machine to our level rather than reconceive
ourselves in its terms, then we might be more naturally inclined to
ennoble our conversation with it. We would do this, for example, by
shaping the computer's outer form with the sensitivity of a sculptor, and
by deriving its frozen, internal logic from an inspired vision of this or
that human activity, just as we can abstract a bare logical structure from
an orator's high and passionate meanings. And we would recognize that
recovering worthy activity and high purpose from this frozen structure
depended upon our ability to warm it with our own passions, enlighten it
with our own meanings, enliven it with our willful intentions. And so,
finally, our fascination with the evolution of "spiritual machines" would
be transformed into our own evolving sense of spiritual responsibility for
those aspects of ourselves we invest in our mechanical creations.


Related articles:
-----------------

"From HAL to Kismet: Your Evolution Dollars at Work" in NF #149:

   http://netfuture.org/2003/Aug2803_149.html#2

"Intelligence and Its Artifacts" in NF #148:

   http://netfuture.org/2003/Aug0503_148.html

"Can Open Standards Suffocate Us?" in NF #82:

   http://netfuture.org/1999/Jan0599_82.html

"The Future Does Not Compute", chapter 3 in *The Future Does Not Compute:
Transcending the Machines in Our Midst*:

   http://www.praxagora.com/~stevet/fdnc/ch03.html


 =========================================================================

                          ABOUT THIS NEWSLETTER

NetFuture, a freely distributed newsletter on technology and human
responsibility, is published by The Nature Institute, 20 May Hill Road,
Ghent NY 12075 (tel: 518-672-0116; web: http://www.natureinstitute.org).
Postings occur roughly every four weeks. The editor is Steve Talbott,
author of *The Future Does Not Compute: Transcending the Machines in Our
Midst* (http://www.praxagora.com/~stevet/index.html).

Copyright 2004 by The Nature Institute. You may redistribute this
newsletter for noncommercial purposes. You may also redistribute
individual articles in their entirety, provided the NetFuture url and this
paragraph are attached.

NetFuture is supported by freely given reader contributions, and could not
survive without them. For details and special offers, see
http://www.netfuture.org/support.html .

Current and past issues of NetFuture are available on the Web:

   http://www.netfuture.org/

To subscribe to NetFuture send the message, "subscribe netfuture
yourfirstname yourlastname", to [log in to unmask] . No
subject line is needed. To unsubscribe, send the message, "signoff
netfuture".

If you have problems subscribing or unsubscribing, send mail to:
[log in to unmask] .

We would like to hear your reactions. Send comments about the publication
to Steve Talbott ([log in to unmask]).

This issue of NetFuture: http://netfuture.org/2004/Dec0704_159.html.

************************************************************************************
Distributed through Cyber-Society-Live [CSL]: CSL is a moderated discussion
list made up of people who are interested in the interdisciplinary academic
study of Cyber Society in all its manifestations. To join the list please visit:
http://www.jiscmail.ac.uk/lists/cyber-society-live.html
*************************************************************************************
