JiscMail -- Email discussion lists for the UK Education and Research communities

CYBER-SOCIETY-LIVE Archives (CYBER-SOCIETY-LIVE@JISCMAIL.AC.UK)



Subject:            [CSL]: NetFuture #139
From:               J Armitage <[log in to unmask]>
Reply-To:           Interdisciplinary academic study of Cyber Society <[log in to unmask]>
Date:               Wed, 4 Dec 2002 08:19:22 -0000
Content-Type:       text/plain
Parts/Attachments:  text/plain (620 lines)

From: Steve Talbott [mailto:[log in to unmask]]
Sent: 03 December 2002 20:37
To: [log in to unmask]
Subject: NetFuture #139


                                 NETFUTURE

                    Technology and Human Responsibility

 =========================================================================
Issue #139     A Publication of The Nature Institute      December 3, 2002
 =========================================================================
             Editor:  Stephen L. Talbott ([log in to unmask])

                  On the Web: http://www.netfuture.org/
     You may redistribute this newsletter for noncommercial purposes.

Can we take responsibility for technology, or must we sleepwalk
in submission to its inevitabilities?  NetFuture is a voice for
responsibility.  It depends on the generosity of those who support its
goals.  To make a contribution:  http://www.netfuture.org/support.html.


CONTENTS:
---------

Editor's Note

Disconnect? (Kevin Kelly and Steve Talbott)
   Of software porting, the third eye, and C3PO

DEPARTMENTS

Correspondence
   Steve, Please Go Back to Being Who You Were (Jon Alexander)

About this newsletter

 =========================================================================

                              EDITOR'S NOTE

For reasons you will recognize as you read this issue, publishing it does
not exactly give me a sense of great success.

Meanwhile, however, three items of interest:

** We've posted to our website a wonderful paper by The Nature Institute's
   affiliate researcher, philosopher Ron Brady.  It's entitled
   "Perception: Connections Between Art and Science", and deals with the
   human contribution to the perceptual world -- a contribution occurring
   at a much more fundamental level than is usually acknowledged.  You'll
   find this paper at

      http://www.netfuture.org/ni/misc/pub/brady/index.html

** A while back I received a large and weighty package in the mail with
   "Kevin Kelly" as the return address.  I eagerly opened it and found
   myself the recipient of a complimentary copy of Kevin's new book, *Asia
   Grace*.  It's quite a remarkable book, full of glorious photographs
   Kevin took when, as a college-aged kid, he traveled through Asia.
   Apart from enabling you to marvel at the world's record for pounds of
   book per words of text (can't be far from one pound per word), the
   volume will give you many hours of lush enjoyment.  You can preview and
   purchase the book at www.asiagrace.com.

** If you didn't see my previous warning, please correct all links to
   NetFuture pages.  The part of your URL reading like either of these:

      www.oreilly.com/people/staff/stevet/netfuture/
      www.oreilly.com/~stevet/netfuture/

   should now read:

      www.netfuture.org/

   The old links (of which many remain) are now out of date, and before
   long will be "Not Found".

SLT

 =========================================================================

                               DISCONNECT?

      Kevin Kelly and Steve Talbott ([log in to unmask]; [log in to unmask])

This exchange is part of an ongoing dialogue about machines and organisms.
For the previous installment see NetFuture #136:

   http://www.netfuture.org/2002/Sep2602_136.html


                      *   *   *   *   *  *  *  *  *

STEVE TALBOTT:  In the last installment of our dialogue (NF #136) you
asked, "What would you need as fully convincing evidence that machines and
organisms are truly becoming one?"

You will recall that earlier (in NF #133) I pointed out what seems to me a
crucial distinction between mechanisms and organisms:  the functional idea
of the mechanism is imposed from without (by us) and involves an
arrangement of basic parts that are not themselves penetrated and
transformed by this idea.  In the organism, by contrast, the idea (or, if
you prefer, the archetype, or being, or entelechy) works from within; it
is not a matter of fixed parts being arranged, but of each individual part
coming into existence (as this particular part with its own particular
character) only as an expression of the idea of the whole.

I illustrated this organic wholeness by describing how we read the
successive words of a text.  Almost with the first word we begin
apprehending the governing idea of the larger passage, which comes into
progressive focus as we proceed.  And this idea shines through and
transforms every individual word.  Dictionary definitions alone would make
a joke of any profound text; each word becomes what it is, with all its
qualities and connotations, only by virtue of its participation in the
meaning of the whole, only as it is infused by the whole.

Our atomistic habits of thought, of course, run counter to this
description.  We can scarcely imagine a whole except as something "built
up from" individual parts with their own self-contained character.  But
the fact is that we could never write a meaningful text, and could never
understand such a text, if the words were not caught up into a preceding
whole that transformed them into expressions of itself.

When Craig Holdrege, in his study of the sloth (NF #97), said that every
detail of the animal speaks "sloth", he was pointing to the same truth.
The fine sculpting of every bone, the character of basic physiological
processes, the smallest behavioral gesture -- all these are "shone
through" by the coherent and distinctive qualities that we can recognize
as belonging to the sloth.

So, Kevin, when you ask what would convince me that machines are becoming
organisms, certainly one prerequisite is that I would have to see that the
foregoing distinction is without basis in reality.  I would have to see
that its own idea is native to the mechanism and governs the mechanism in
the way the idea of the organism governs and shapes the organism from
within, bringing the parts into existence as expressions of itself -- or
else that organisms *fail* to show this sort of relation between part and
whole.

Now, I realize that in NF #133 your initial response to my distinction was
to deny it.  It seemed obvious to you that I was thinking of "old"
technology -- industrial-age machines -- and not things like cellular
automatons, neural nets, artificially intelligent robots, and all sorts of
other technologies that show complexly interacting elements.  I remain
hopeful, however, that your response was more a function of my brief and
inadequate effort to capture the distinction than a real disagreement.

With that hope in mind, let me explain why your counterexamples don't work
for me.  Think first of a computer.  The hardware can be implemented in
radically different materials with radically different designs.  (You're
doubtless aware of all the ways people have imagined constructing a
Universal Turing Machine.)  Then there is the programming, or software,
which defines the functional idea of the computer.  Does this program work
in the computer in the same way the idea of the organism works in the
organism?

Clearly not.  You could remove the software from one computer and install
what is essentially the same software in a wholly different computer.
Conversely, having removed the software from the first machine, you can
load a second program into it.  In the former case, you have the same
functional idea driving two computers that may be unrecognizably different
in materials and design.  In the latter case you have two completely
different functional ideas successively driving the same computer.  This
arbitrary relation between the programmatic idea and its hardware
embodiment is something you will never find in the psychosomatic unity of
an organism.  (Try putting the mind of a horse into the body of a pig!)
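The arbitrary pairing of program and hardware admits a minimal sketch (the function here is purely illustrative, not anything from the exchange): the same program text -- the "functional idea" -- runs unchanged on wholly different chips, since nothing in it refers to the hardware beneath.

```python
import platform

# The "functional idea": a rule imposed on the machine from without.
# Nothing in this source text mentions the hardware it runs on.
def functional_idea(x):
    return 2 * x + 1

# The underlying hardware differs from host to host...
print(platform.machine())   # e.g. 'x86_64' on one machine, 'arm64' on another

# ...but the program's behaviour is identical everywhere it is loaded.
print(functional_idea(20))  # 41 on every host
```

Conversely, the same host will accept any other program in this one's place: the relation between program and hardware is exactly the loose, external one described above.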

The relevance to my larger point is this.  If there is no horse/pig
problem with computers, it's because the software coordinates the pre-
existing elements of the hardware rather than enlivening them and bringing
them into being; and the different programs are therefore free to
coordinate the elements in different ways.  These elements are not
themselves transformed by the program from within, in the manner of words
in a text, or bones, muscle fibers and cells in a developing organism.
Nor is the program continually embodying itself in new, previously non-
existent forms of hardware as it "matures".  (If you think genetic
algorithms contradict this, then we need to talk about them.)

Does this capture the distinction I'm after a *little* better?

One other thing.  I get the feeling that you half expect me, upon
reviewing all the achievements in robotics and AI, to be stunned by the
sheer evidential weight in favor of the increasingly organic and life-like
character of mechanisms.  Rest assured:  I am impressed -- sometimes even
stunned -- by these achievements.  They reinforce my conviction that there
is no ultimate bound upon human creative potentials, and these certainly
include the possibility of housing our ever more sophisticated and subtle
ideas in mechanisms.  I see no end to this process, no limit to how life-
like our devices can become or how fully they will insert themselves into
the warp and woof of our lives.

This, in fact, is why I'm convinced that the decisive trial humanity must
now endure has to do with whether we can hold on to our own fullest
capacities so as to remain masters of our machines.  If we fail the test,
we will find that we can no longer differentiate ourselves from our
creations.  But this will not mean that machines have become organisms.
It will mean, rather, that we have continued to lose our ability to
distinguish the organism's act of creation from its products and therefore
have abdicated the very selfhood that is one with our creative powers.  We
will have succumbed to the downward pull of our machines, becoming like
them.

So what you and I are discussing is not at all a merely academic question!
I am grateful to you for your tenacity in demanding clarity from me in my
explanations.  I trust you will not relent.

                      *   *   *   *   *  *  *  *  *

KEVIN KELLY:  OK, so let's put your criteria to a test.  We'll take a few
organisms (a sparrow, a reindeer lichen, and a diatom), pull them apart,
and ask some experts if they can identify the organism -- if they can see
the whole organism -- in the parts.  And let's do the same with some
technology (a 747 plane, a book, and a watch).  We'll take them apart and
ask some experts if they can identify the technological species -- if they
can see the whole thing -- from the parts.  My guess is that the two teams
would have roughly the same degree of success, on average.

Would you agree that if they did have the same degree of success, this
would (as you seem to suggest) convince you that machines and organisms
are becoming one?

   > You could remove the software from one computer and install what
   > is essentially the same software in a wholly different computer.

Man, are you wrong about this.  Have you ever tried this?  Have you ever
spoken to *anyone* who has tried to port software for one computer onto a
wholly different computer?  They would universally tell you that it was
like "putting the mind of a horse into the body of a pig!"  There is
profound universality in computation (see my December *Wired* article) but
what this *does not mean* is that any particular implementation of it can
be moved to another matrix.  It simply never happens in practice.
Because: machines are just like organisms.

   > If there is no horse/pig problem with computers....

But there *is* a horse/pig problem, and this problem stems from the
commonality of machines and organisms as complex, dynamic systems in
reality.

   > One other thing .... I see no end to this process, no limit to how
   > life-like our devices can become or how fully they will insert
   > themselves into the warp and woof of our lives.

Now I am totally confused.  This is what I have been saying.

So let me see if I have this right.  You say that there is no limit to how
life-like our devices can become.  You admit that we'll add ever more
life-like functionality to our machines, that there is no limit to what
lessons we can extract from biology to import into machines, until
(without limit) we are able to grow and evolve them.  But while these
machine organisms will be used everywhere, and we'll depend on them the
way we depend upon organisms, and while these things look like organisms,
behave like organisms, and are used like organisms, in fact they aren't
organisms at all because they lack an unlocatable trait, a spark, a vital
something that we can't measure, can't pinpoint, and have trouble
perceiving over time because our third eye which can detect this spark of
real life is dimming.  So while we will be surrounded by vast quantities
and varieties of technology that will appear life-like to all who look and
in any way we measure, this lifeness will be an illusion because in fact
these things will lack an inner, unmeasurable quality that we -- ooops --
can no longer see.  That is why when a scientist says, I see no difference
between this man-made being and an organism, the proper response is:  that
is because you have lost Ulysses's vision.  The improper exfoliation of
life-likeness in machines has blinded your ancient sight. And if you can't
see the true inner life of life, then it must be because (aiyeee!) you
have turned into a machine.  True life recognizes true life; fake life
only recognizes fake life.  Blessed are those with true life.

Is this right?

                      *   *   *   *   *  *  *  *  *

ST:  Well, it must at least be right as a statement of how you have read
my words -- which has me very, very disappointed.  It is you, after all,
and not I who say machines "grow" and "evolve" when in fact everyone knows
we manufacture them.  And it is you who speak of an unlocatable vital
essence, when my entire effort has been to describe for you what numerous
people over the past few centuries (who have bothered to think about the
matter) have been able to recognize in organisms, wholes, parts, and
machines.

Please, please, Kevin, hold in your mind both aspects of my reiterated
claim:  (1) we can abstract a certain formal structure from our own
intelligent activity and impress something of this structure upon
mechanical devices; and (2) this impressing of ideas from without is
identifiably different from the living idea that organizes and constitutes
matter from within -- a difference recognizable in the relation between
whole and part.

Every thermostat, every printed page, every complex, electromechanical
loom or harvesting machine, every silicon chip testifies to our wonderful
ability to engrave something of the structure of our intelligence upon the
stuff of the world.  (Do you think all these are alive, more or less?  If
not, why?)  It would be insane for me to say there is some limit to this
process -- to say that at some particular point we will no longer be able
to take a next step.

But saying there is no limit to the structure we can imprint upon physical
materials is not the same as saying these materials must be alive.  I'm
frustrated that you keep trying to get me to infer life from complex
structure without giving me any reason for doing so apart from, "Gee, look
at this amazing spectrum of contraptions out there -- some of them sure
*seem* alive!"  Well, so do mechanical dolls and Aibos to some people.  Is
that supposed to be the convincing point?  Or could it be that we actually
need to think about it a little, even if this strikes you as miserably
"philosophical"?

As far as I can see, the idea of an unlocatable spark serves no role in
this conversation except to enable you to avoid discussing in its own
terms the actual distinction I've been making between organic wholeness
and mechanism.

As for asking a group of experts to pull a sparrow and an airplane apart, the
issue was whether there's a different sort of relation between whole and
part in the two cases.  Are you really wanting to decide this by a
democratic vote of experts rather than through your own attempt to grasp
the substance of the matter?  And are you serious in suggesting such a
gruesome test for your experts?  Surely you realize that to pull the bird
apart is to destroy the very thing you're looking for!  "We murder to
dissect".

Your suggestion is the quintessential expression of the historical
development I mentioned earlier, whereby we have learned to ignore the
very aspects of the world that would have helped us to understand the
organism.  No wonder our culture must largely say to those who would point
to the organism, "I look, but I don't see".  The only looking we practice
is a murderous looking.  You can, if you wish, ridicule the attempt to
rise above such practice as a quest for "ancient sight", but the fact is
that whoever has not yet learned to transcend the limitations of his own
culture remains a prisoner of this culture -- a point I thought you agreed
with.

All this reminds me of the twentieth-century behaviorists, who dominated
academia with their denial of mind.  They kept proclaiming, "We don't see
it" while steadfastly refusing the only possible way of looking for it,
which was to attend to their own act of looking.  If the matter had been
decided by a vote of the experts in 1950, the cognitivist revolution
leading to the kind of computational stance you are now assuming would
never have happened.

Actually, Kevin, I suspect you could be one of the new revolutionaries we
need today, because I'm sure you yourself have an instinctive feel for the
truth of the matter.  Having witnessed the 747 being pulled apart, you
would not consider it outlandish if the plane were to be reassembled and
made to fly again.  It's just a matter of putting the parts back into the
right relationship with each other.  But if you watched the sparrow being
reassembled from its parts, you would not expect it to fly.  What's taken
flight is the inner being that enlivened it and made it an organic unity.

Remember the Star Wars robot, C3PO, lying dismembered on a table?  I'm
sure you complained of no deus ex machina when it was remanufactured; but
you ought to have complained if those were human parts on the table and
*they* were successfully "remanufactured".

There's a closely related point where I'm sure you also have sound
instincts.  An orthopedic surgeon manipulating your arm to discover a
"mechanical" defect regards the arm in a manner completely different from
when she is attending to its meaningful gestures.  Likewise, the doctor
examining your eyeball will step back in order to regard *you* when it is
time to report the results of her observation.  Your eye, face, and arm
are now taken as the unified outer expression of a whole -- an expression
of your inner being -- where before they were viewed (perhaps to the
detriment of your health) as the isolated parts of a mere mechanical
exterior.  The two ways of looking couldn't present a starker contrast.
You would in fact rebel if the doctor continued unrelentingly to objectify
you.  You tolerate it only as long as you think there's a legitimate
reason for the more external and mechanical approach, and you recognize a
difference between the two approaches.

I know; you don't need to say it:  "Some people now look into the eyes of
robots the way they look into the eyes of their friends".  Of course they
do.  Already in the 1970s there were those who projected a living
psychiatrist into Joseph Weizenbaum's ELIZA program.  This is what I meant
about losing our ability to distinguish between organisms and machines.
But in the face of disappearing capacities, are we obligated to go with
the flow?  Even if there were only one remaining person on earth who could
see colors, should he deny his color because of the prevailing blindness?
But I'm quite sure that at some level everyone (including you) still
recognizes the difference between a machine and an organism.
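As a reminder of how little machinery that projection required, here is a minimal ELIZA-style reflection in the spirit of Weizenbaum's program (a modern sketch; the pattern and wording are illustrative, not his actual 1966 script):

```python
import re

# Word-for-word reflections, so the program can echo the speaker's
# statement back as a question -- the whole of the "psychiatrist".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def respond(utterance: str) -> str:
    m = re.match(r"i feel (.*)", utterance.lower().rstrip("."))
    if m:
        reflected = " ".join(REFLECTIONS.get(w, w) for w in m.group(1).split())
        return f"Why do you feel {reflected}?"
    return "Please go on."  # the stalling reply for anything unmatched

print(respond("I feel my work is ignored"))  # Why do you feel your work is ignored?
```

Nothing here understands anything; the "listener" people experienced lived entirely in themselves.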

As for porting software between computers:  yes, I'm aware of the need for
machine-level code.  How else could the software "coordinate the elements
of the hardware" in the external way I described?  But this scarcely
alters my point:  you can take a massively complex program with its own
distinctive character (say, a connectionist AI program rather than an
expert system or "central command and control" program) and you can port
this program, with its distinctiveness largely intact, to utterly
different pieces of hardware.

Also, you ignored the other half of my example:  not only can you port the
same type of software to many different machines, but you can also drive
the same machine with many different software packages.  C3PO could have
been remanufactured with an entirely new "personality" -- or, for that
matter, with some of the character of a donkey.  So I say again:

   This arbitrary relation between the programmatic idea and its hardware
   embodiment is something you will never find in the psychosomatic unity
   of an organism.  (Try putting the mind of a horse into a pig's body!)

Finally, a look ahead.  We've been dealing very generally with the
relation between organisms and mechanisms.  We might obtain more traction
by specifically considering the human being.  Here is where the living
idea (or being or entelechy, if you prefer) of the organism lights up in a
bright, centered focus of self-consciousness.  In this self-consciousness
we certainly have no obscure trait requiring your "third eye" to perceive.
Rather, we have what is most immediate and undeniable, what is as close to
us as our most intimate selves -- the inescapable starting point for
anything we could possibly build or even hypothesize.

The problem of consciousness is a crucial stumbling block for the AI
project.  This is because intelligence as inner *activity* (as opposed to
the various outward results that always presuppose the activity) is
inseparable from consciousness, and we have no reason to think we can
endow any current or conceivable machine with consciousness.

                      *   *   *   *   *  *  *  *  *

KK:  In the end, Steve, we are just going to have to agree to disagree.  I
feel our conversation is circling back to itself, without covering any new
ground at this point.  Whatever evidence you supply that we can't ever
make living machines (or minds) I reject as shortsighted, and whatever
evidence I supply that this is possible you reject as irrelevant.

At this point, I think we should let the question lie.  It will be proven
one way or the other in time.  Unfortunately for me, I don't expect
artificial consciousness in my lifetime.

So for the moment (my lifetime) I will have to agree with you.  So I'll
state that we can tell the difference between machines and organisms now.
But what this means is that if *by some weird breakthrough*, nerds were
able to make in my lifetime a machine that 90% of humans thought was
conscious, or an artificial being that 90% of humans thought was alive,
then I will be pleasantly surprised, and you ... you would be what?  In
the 10% group who said it was all an illusion, or who said it didn't
really matter, or who suspected a hoax?  I'm not sure. I suspect you would
try to define the label away, since what we call it is a matter of words
and definitions anyway.  (The history of artificial life and mind is a
history of redefining life and mind.)

But I am not saying this to try to convince you, because I have just
agreed that I can't do that, and that for the sake of this argument I
agree with you within my lifetime.  I am only pointing out that your being
right doesn't change much, but if I am right, then it changes almost
everything.  Now, one could say the same thing about discovering an ET
intelligence; the fact that it would be momentous, however, does not mean
that it is probable or likely.  But few would say encountering an alien
was impossible (on any timescale), which is what I think I hear you say
about AI and A-life.  (Part of what I am suggesting is that we will
encounter an alien being on this planet -- one that we make ourselves.)  I
mention this asymmetry only to indicate that when a possibility carries
such high impact, it will pay to monitor it closely.

So I think I'd like to end my part in this conversation about the
relationship between machines and life with this suggestion.  I will
continue to rehearse in my mind the possibility that the demarcation
between the made and the born remains forever (not so hard for me because
I don't expect it to vanish completely in my lifetime); at the same time
you might try rehearsing what life (and your life and philosophy) would be
like if the border disappeared forever.

That's not a challenge, only a genuine suggestion for contemplation.  In
the meantime, perhaps another topic will come along that can engage us and
move our understanding forward.

                      *   *   *   *   *  *  *  *  *

ST:  So be it, Kevin, although this saddens me.

I will round out my own contribution to this discussion by answering your
question about what my response would be if ninety percent of my fellows
took a robot to be alive.  The obvious and inescapable answer: it would
depend on my understanding of robots and living things.  To the extent I
had some understanding, opinion polls would be irrelevant.

It's true that "anything might happen" is an appropriate expectation
whenever we lack all insight.  (A dragon might swallow the sun; a pot of
tepid water might spontaneously boil over.)  But the whole point of
science is to gain enough understanding of the essential principles of a
situation, however subtle they may be, so that we are no longer reduced to
saying "anything might happen".

In this regard, I've been puzzled by your preference for a kind of gut-
feeling populism, in which you are fortified by your subculture's common
hope that tomorrow anyone might walk through the door, including a living
robot.  Maybe the hope is justified, or maybe not, but the only way to get
a firmer grip on the situation is to deepen our understanding of living
beings and mechanisms.  To say "let's just keep building these things and
see what happens" does little good if we fail to understand what we have
built.  We merely "discover" what we expected to find all along.

There are, after all, ways to pursue the key issues.  The huge mechanist-
vitalist controversy focused on questions not unlike those you and I have
been discussing -- and, within mainstream science at least, the mechanists
came away confident that they had vanquished the vitalists for good.
(What's needed, I think, is to revisit that debate without the Cartesian
assumptions by which both sides were bound.)

All this may help you see why I'm uncomfortable with your repeated
suggestion that anyone who attempts to discuss the issues in substantive
terms must be engaging in mere empty play with definitions.  He may, of
course, but the charge needs to be demonstrated, not used as a catch-all
means of dismissal.

In any case, Kevin, I do want to say that I've benefited a great deal from
our vigorous interactions, and I thank you for your willingness to
participate.  It's been bracing -- and, for me, humbling at times.  I've
learned, among other things, how easily my most deeply felt words can
prove merely obscure to an extremely intelligent reader.  I've often had
the feeling, "Well, Steve, you sure blew that one.  Back to the drawing
board".

But, on a happier note, I'd like to issue you a standing invitation:  if
you wish to respond to anything I've just now said -- or anything I say in
the future -- the pages of NetFuture will be open to you.

 =========================================================================

                              CORRESPONDENCE


Steve, Please Go Back to Being Who You Were
-------------------------------------------

From:  Jon Alexander <[log in to unmask]>

Dear Steve,

I've read your publication only occasionally over the past few years.
This has given me perhaps some of the sort of insight that one gets when
visiting a relative who does not appear nearly as well as on the last
visit.

You began with a very sensitive, nuanced exploration.  You now have all of
the invective and rhetorical point making and position defending of a true
believer in an entrenched ideological position.

I find this development sad.  You were stardust, you were golden, you
began with such great promise.  Please go back to re-read some of those
early pieces -- and try to find your way back to the garden.

Best wishes,
Jon

Dr. Jon Alexander, Associate Professor, Political Science & International
Affairs, and immediate past President, International Sociological Assoc.,
Research Committee #26: Sociotechnics -- Sociological Practice.  Carleton
University, 1125 Col. By Dr., Ottawa, Canada, K1S 5B6.  613-520-2797

                      *   *   *   *   *  *  *  *  *

Jon --

Actually, my own experience of the matter is that I am more nuanced,
gentler, more open, and more useful to readers than I was in my earlier,
"angrier" years.  (Not that I'm unwilling to call nonsense "nonsense" --
as in the "Mindlessness and the Brain" piece in NF #138, which I assume
triggered your response.)  Odd how we could have such utterly different
perceptions of the matter!  Could it be (I don't pretend to know) that as
you have progressively discovered the underlying convictions from which
those earlier writings arose, you have simply found these convictions not
to your liking?  Are you mistaking your discomfort over a point of view
for narrowness on the part of the one who presents that point of view?
Are you disliking the very fact that a person can have a well-defined
point of view?

Only you can answer.  But since your brief note, serving only a negative
function, provided little guidance as to my actual offenses, I thought it
not unfair to respond to your severe observation with some equally severe
questions.  I trust that both of us have only the best of intentions, and
that both of us can benefit from the self-reflection this sort of
exchange seems to call for.

Steve

 =========================================================================

                          ABOUT THIS NEWSLETTER

NetFuture, a freely distributed newsletter dealing with technology and
human responsibility, is published by The Nature Institute, 169 Route 21C,
Ghent NY 12075 (tel: 518-672-0116; web: http://www.natureinstitute.org).
Postings occur roughly every three or four weeks.  The editor is Steve
Talbott, author of *The Future Does Not Compute: Transcending the Machines
in Our Midst* (http://www.praxagora.com/~stevet/index.html).

Copyright 2002 by The Nature Institute.

You may redistribute this newsletter for noncommercial purposes.  You may
also redistribute individual articles in their entirety, provided the
NetFuture url and this paragraph are attached.

NetFuture is supported by freely given reader contributions, and could not
survive without them.  For details and special offers, see
http://www.netfuture.org/support.html .

Current and past issues of NetFuture are available on the Web:

   http://www.netfuture.org/

To subscribe to NetFuture send the message, "subscribe netfuture
yourfirstname yourlastname", to [log in to unmask] .  No
subject line is needed.  To unsubscribe, send the message, "signoff
netfuture".

Send comments or material for publication to Steve Talbott
([log in to unmask]).

If you have problems subscribing or unsubscribing, send mail to:
[log in to unmask] .

************************************************************************************
Distributed through Cyber-Society-Live [CSL]: CSL is a moderated discussion
list made up of people who are interested in the interdisciplinary academic
study of Cyber Society in all its manifestations.  To join the list please visit:
http://www.jiscmail.ac.uk/lists/cyber-society-live.html
*************************************************************************************
