>The issue here is not "what is generated by the program" but "the
what is generated is often not (explicitly) represented in the
program. In this sense, the latter cannot be seen as a theory of the
former.
>Again, for many AI people and logicians, current logical formalisms
>are incapable of representing core ideas in some agent models - such
>as knowing not requiring infinite regress.
That's why it is necessary to keep theory, its formal expression, and
its implementation distinct. Certainly, existing logics are
insufficient to express a lot of things (motivations even more than
knowledge). They express a subset of existing theories on mental
states and processes. But the same is true for computational models.
>For these models - new "logics" are needed. Until then, the program
>itself is the logic and is the theory.
perhaps the logic, but not the theory (see above)
> The results generated by the simulation are the hypotheses or predictions.
I would say they are often data in need of explanation
>Here I would say that the simulation results are used to uncover or
>observe the processes - but the simulation model is a formal
>description of the process. The language in the model is just a
>distinct symbol system for explicitly formulating the theory.
The issue here is the explicitness. A painting by Picasso generates
emotions but is not a theory of them. In some sense, one can say that
emotions are incorporated into the painting, but not in an explicit
way.
>I would suggest that this is a philosophy of science question
>involving, among other issues, what features make a symbol system
>adequate for formulating theory.
right. I think it would be important to have such a public discussion
National Research Council, Institute of Cognitive Science and
Technology, V.LE Marx 15, 00137 Roma.
LABSS (Laboratory of Agent Based Social Simulation)
& University of Siena - Communication Sciences - "Social Psychology"
email: [log in to unmask] - http://www.istc.cnr.it/lss/