Marcio,
I'd like to comment on one thing you wrote (see below).

It is quite easy to create software that learns from experience.
E.g., programs written in Lisp and many related computer languages are
literally able to rewrite themselves.
The problem is that a key feature of all known techniques so far is
that they require hardwiring certain immutable structures into the
software. These structures, by their immutability, make the software
at most an idiot savant.
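
The idea of software rewriting itself can be sketched in a few lines. This is a minimal illustration in Python (standing in for Lisp, where code-as-data makes this even more natural): the program keeps one of its own rules as source text and replaces that text in response to "experience." Note that the fixed learn/behave machinery is exactly the kind of hardwired, immutable structure described above; only the rule text can change.

```python
# A program's current behaviour, stored as data the program can rewrite.
rule_source = "lambda x: x + 1"

def behave(x):
    """Apply the program's current rule to an input."""
    return eval(rule_source)(x)

def learn(new_rule_source):
    """'Learn from experience' by replacing the rule's own source text."""
    global rule_source
    rule_source = new_rule_source

print(behave(3))          # behaviour before learning: 4
learn("lambda x: x * 2")  # experience supplies a better rule
print(behave(3))          # behaviour after learning: 6
```

The learning mechanism itself (eval, the global rule slot) cannot be changed by the program, which is the limitation in question.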

Learning as symbol manipulation is not a problem, because our brains
work strictly by symbol manipulation (the best guess we have so far)
on hierarchical/recursive mental models. For example, our "self-image"
is a model we have of ourselves; it exists in our minds alongside a
model of reality, and the model of ourselves contains, to an extent, a
model of that model of reality, and so on. Of course, these models are
not mathematical or logical in nature, but they do obey the laws of
nature, just as a computer does. Also, these models are woefully
incomplete compared to the models we consciously build of reality
(e.g. physics, psychology), yet they seem ample to keep us going.
I say our brains work by symbol manipulation because all they have to
work with are symbols of things. What else could they possibly do?
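
The recursive-model idea can be made concrete as a data structure. This is just a toy sketch: a "mind" holds a model of reality and a model of itself, which in turn holds another (shallower) model, with the nesting capped at some depth; the cap stands in for the "woefully incomplete" nature of these models.

```python
def build_mind(depth):
    """Return a nested dict: a mind with a model of reality and of itself."""
    if depth == 0:
        # The recursion bottoms out: at some point the self-model is absent.
        return {"reality_model": "...", "self_model": None}
    return {
        "reality_model": "...",
        # The self-image contains a model of the mind, one level shallower.
        "self_model": build_mind(depth - 1),
    }

def nesting_depth(mind):
    """Count how many levels of self-model the mind contains."""
    d = 0
    while mind["self_model"] is not None:
        d += 1
        mind = mind["self_model"]
    return d

mind = build_mind(3)
print(nesting_depth(mind))  # → 3
```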

Neural networks are an attempt to create a unit of computation that is
like the brain's "wetware." What is interesting about NNs is that one
cannot predict how a network will form upon exposure to stimuli.  That
is, the "programmer" doesn't so much program an NN as select the
inputs/outputs used to train it.  That changes the entire notion of
the role of the programmer to something more akin to that of a
teacher.  The problems with NNs are again issues of scale: (a) there
are no NN systems that allow the connections between neurons to change
over time, even though that has been shown to happen in the human
brain; (b) inputs to NNs are limited to what can easily be fed into a
computer, which is orders of magnitude less data than what is
available to a human brain; and (c) we cannot (yet) make artificial
neurons as dense as real ones, so we cannot build a human-brain-sized
NN due to the size constraints involved.
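
The "programmer as teacher" point can be seen in even the smallest example. Below, a single perceptron (the simplest kind of NN, in pure Python) is never told the OR function; we only choose the training examples, and the weights form on their own.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Train a two-input perceptron by the classic error-correction rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The "teaching" step: we pick the input/output pairs, not the rule.
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in or_examples])  # → [0, 1, 1, 1]
```

Change the training examples to AND and the same untouched code learns AND instead, which is exactly the teacher-not-programmer role described above.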

You say that thinking is symbol manipulation.  I say: we have no idea
what thinking is, and to exclude the possibility that it is just
symbol manipulation is as narrow as pronouncing that thinking is
definitely nothing more than symbol manipulation.

You say designing involves activities that are complex.  I say we do
not know just what those complexities are, and that *that* is why they
seem complex.  Once we do understand them, and I have no doubt that we
will, eventually, they will seem rather simple.

Just some random comments.
Cheers.
Fil

On 20 June 2011 05:18, marcio rocha <[log in to unmask]> wrote:
> Dear Ken and Filippo
>
> Thank you for sharing your thoughts and providing such a rich discussion.
> [...]
> Returning to the machines, I think it is not difficult to program a machine
> to learn from experience. A chess-playing computer can easily be
> programmed to incorporate new moves by observing its opponent. The
> problem is that learning is a floating concept, and in this case it reduces
> learning to the simple manipulation of symbols. Manipulating symbols does
> not mean thinking. In addition, there are complex questions involving our
> existence (the self, emotion, desires, imagination, free will, etc.) that are
> directly related to our creative ability, decision-making, etc.
> [...]
>
> Cheers...
>
> Marcio Rocha
>


-- 
\V/_
Filippo A. Salustri, Ph.D., P.Eng.
Mechanical and Industrial Engineering
Ryerson University
350 Victoria St, Toronto, ON
M5B 2K3, Canada
Tel: 416/979-5000 ext 7749
Fax: 416/979-5265
Email: [log in to unmask]
http://deseng.ryerson.ca/~fil/