A couple of points and questions are inserted below.
> -----Original Message-----
> Kathleen says
>
> >The issue here is not "what is generated by the program" but "the
> >program itself"
[Rosaria replies:]
> what is generated by the program is often not (explicitly)
> represented in the program. In this sense, the latter cannot
> be seen as a theory of the former.
I wouldn't say that a simulation program is supposed to be a "theory of"
some other theory. I don't really know what that would mean. It could be
a "theory of" whatever it is that the other theory is addressing.
>
> >Again, for many AI people and logicians, current logical formalisms
> >are incapable of representing core ideas in some agent models - such
> >as knowing not requiring infinite regress.
>
> that's why it is necessary to keep distinct a theory, its
> formal expression, and its implementation. Certainly,
> existing logics are insufficient to express a lot of things
> (motivations even more than knowledge). They express a subset
> of existing theories on mental states and processes. But the
> same is true for computational implementations.
I wouldn't agree that, just because a particular formal language is
unable to express a particular idea, it cannot express a theory.
>
> >For these models - new "logics" are needed. Until then, the program
> >itself is the logic and is the theory.
>
> perhaps the logic, but not the theory (see above)
>
> > The results generated by the simulation are the hypotheses or
> > predictions.
>
> I would say they are often data in need of explanation
The hang-up here may be that "explanation" can mean different things. I
have some experience with this that may help to clarify. I've worked on
the problem of explaining and predicting equilibrium distributions of
resources across positions in an exchange network. One theory, for
example, uses a graph-theoretic method to solve for the relevant values.
I also have a simulation that iterates through a series of negotiations
and exchanges between individuals. There is a high degree of
correspondence (though not perfect) between the simulation and the
theory--and also with the results of empirical studies, for that matter.
I would contend that both the simulation and the graph theory offer
explanations for what happened in the experiments. If only the
simulation existed, Rosaria might look at its output and say "these are
data in need of explanation" when in fact the simulation does embody an
explanation of a certain type--one that, in this case, predicts the
observed output as a consequence of a structural arrangement of actors
engaging in a series of decisions and exchanges, bound by certain
contextual rules and cognitive capacities. The explanation that Rosaria
may think is missing is something like the graph theory which predicates
the pattern of exchange outcomes on the network's structural form,
assuming that certain conditions are satisfied.
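For concreteness, here is a minimal toy sketch of the kind of iterative
negotiation-and-exchange simulation I have in mind. To be clear, everything
in it is my own illustrative assumption, not the actual model: a three-actor
line network (A-B-C), a fixed 24-point pool per exchange, and a one-step
trial-and-error rule by which actors raise their demands after a successful
exchange and lower them after exclusion.

```python
import random

# Hypothetical line network A - B - C: B can exchange with either A or C,
# but each actor completes at most one exchange per round.
NEIGHBORS = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
POOL = 24          # points divided in each completed exchange (assumed)
STEP = 1           # demand adjustment per round (assumed)
ROUNDS = 2000

def simulate(seed=0):
    rng = random.Random(seed)
    demand = {a: POOL / 2 for a in NEIGHBORS}   # opening demands
    earned = {a: 0.0 for a in NEIGHBORS}

    for _ in range(ROUNDS):
        matched = set()
        pairs = [("A", "B"), ("B", "C")]
        rng.shuffle(pairs)                      # random negotiation order
        for x, y in pairs:
            if x in matched or y in matched:
                continue
            # an exchange completes only if the joint demands fit the pool
            if demand[x] + demand[y] <= POOL:
                earned[x] += demand[x]
                earned[y] += demand[y]
                matched |= {x, y}
        # trial-and-error adjustment: successful actors ask for more,
        # excluded actors ask for less
        for a in NEIGHBORS:
            if a in matched:
                demand[a] = min(POOL, demand[a] + STEP)
            else:
                demand[a] = max(0, demand[a] - STEP)

    # average points earned per round, by network position
    return {a: earned[a] / ROUNDS for a in earned}

print(simulate())
```

Even this crude version reproduces the qualitative structural prediction:
the central position B, which can play A and C off against each other,
accumulates more resources per round than either peripheral position. That
is the sense in which the micro-level simulation embodies an explanation of
the macro-level equilibrium pattern.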
In other words, the graph theory has supplied one account of the
phenomenon--a relatively macro-level account, in this case. The
simulation has supplied another account, a relatively micro-level one in
this case. I prefer to think of the only real "data" as being that which
was generated in the empirical experiments, and against which both the
graph-theoretical and the simulation-theoretic accounts may be compared.
The existence of the graph theory does not in any way diminish the
status of the simulation as a theory. Only a poor fit with the empirical
evidence, or the existence of a superior alternative theory (in terms
of its precision and breadth), can do that.
>
> >Here I would say that the simulation results are used to uncover or
> >observe the processes - but the simulation model is a formal
> >description of the process. The language in the model is just a
> >distinct symbol system for explicitly formulating the theory.
>
> The issue here is the explicitness. A painting by Picasso
> generates emotions but is not a theory of them. In some
> sense, one can say that emotions are incorporated into the
> painting, but not in an explicit way.
> >
> >I would suggest that this is a philosophy of science question
> >involving, among other issues, what features make a symbol
> >system adequate for formulating theory.
>
> right. I think it would be important to have such a public
> discussion somewhere.
Actually, I think it's more of a linguistic issue than a philosophical
issue. What hasn't been discussed much is the process of assigning
meaning to the symbols used in the simulation. Just as a discursive
theory is only useful insofar as the meanings of its terms are shared by
all of its "users," so too must the meanings of the terms of a
simulation be shared by all of its users. To permit this requires
explicit definitions for any terms that might be understood in different
ways by different people--not common practice among authors of informal
theories.
Barry Markovsky
>
> Rosaria Conte
>
> National Research Council, Institute of Cognitive Science and
> Technology, V.LE Marx 15, 00137 Roma. LABSS (Laboratory of
> Agent Based Social Simulation) & University of Siena -
> Communication Sciences - "Social Psychology"
>
> voice:+39+06+86090210;
> fax:+39+06+824737;
> cell. 3355354626
> email: [log in to unmask] - http://www.istc.cnr.it/lss/
>