Loet wrote:
> 1. We share with the biological and economic sciences a model
> in which the equations themselves represent theories, and
> then additionally the program implies another (formal) theory.
> Without a theoretical expectation, one would not be able to
> specify the program.
Nor would one be able to specify the equations, right? And a
"theoretical expectation" is not a theory, correct?
> The simulation results show possible events other than the
> ones which went into the substantive theory construction that
> led to the equations because one is able to search in a
> parameter space (as different from a design space).
The same is true of *any* good theory, i.e., you should be able to
derive consequences (and predictions) for phenomena beyond those that
initially motivated the theory.
> The simulation results, however, need to be appreciated by
> the substantive discourses and this may in turn lead to
> refinements in the simulation model, i.e., an update of
> the formal (not-substantive) theory guiding the program
> formulation. Thus, a process of translation among different
> theoretical domains potentially improves the specification
> on both sides.
I think you've lost me from here on. Maybe it's your distinction between
"formal theory" and "substantive theory" -- as though formalizing
necessarily eliminates "substance"? If that's what you mean, I'd
disagree. I have heard it said that formalizing squeezes all of the
substance out of a theory, but to me formalization means using a formal
language (with clearly-defined terms) and formal logic (with rules for
deriving new statements), making the *substance* of the theory as clear
and parsimonious as possible.
> For example, the program specifies when a subroutine
> with equations representing a substantive program is
> called. In this sense the program weights the perspective
> of each theory in terms of its relative impact. Thus,
> the substantive theories "compete" in the simulation,
> and they can be expected also to compete in explaining
> the simulation results. Further formalization of the
> theories in relation to simulation results may lead to
> changing the formulation of the program, i.e.,
> improving the theory at this level.
I'm sorry, I just don't understand this--the idea that theories compete
within a simulation.
> 2. But we share with AI that we are interested not only
> in simulating how the system under study develops, but
> also in how it develops in terms of meaning as a
> construct that is reconstructed.
I'm not sure I understand this, but I don't think that I would accept it
as some sort of universal goal for sociological simulations.
> Social systems process not only information (like
> in biology), but also meaning. The processing of
> meaning can be studied by using simulations.
I don't believe that all simulations in sociology must "process
meaning," but at the risk of self-referential incoherence, I'm not sure
what that means! :) I've worked on several kinds of simulations and used
many others, and only a few of them were expressly interested in
anything that might be termed "meaning processing."
> The process is in many respects
> too esoteric for an empirical design. In symbolic
> interactionism one returns to "thick descriptions",
> but these single case studies cannot be translated
> into equations. We can perhaps study these inter-human
> constructs within the simulation because this enables
> us to study other possible meanings in a virtual realm.
> The historical dimension, however, remains important
> for the sociological explanation. In this respect
> the program of social simulations is also different
> from AI.
> With kind regards,
> Loet
Best wishes,
Barry
> -----Original Message-----
> From: News and discussion about computer simulation in the
> social sciences [mailto:[log in to unmask]] On Behalf Of
> Barry Markovsky
> Sent: Friday, November 21, 2003 8:27 PM
> To: [log in to unmask]
> Subject: Re: Theory and Simulation
>
>
> A couple of points and questions are inserted below.
>
> > -----Original Message-----
> > Kathleen says
> >
> > >The issue here is not "what is generated by the program" but "the
> > >program itself"
>
> [Rosaria replies:]
> > what is generated by the program is often not (explicitly)
> > represented in the program itself. In this sense, the latter
> > cannot be seen as a theory of the former.
>
> I wouldn't say that a simulation program is supposed to be a
> "theory of" some other theory. I don't really know what that
> would mean. It could be a "theory of" whatever it is that the
> other theory is addressing.
>
> >
> > >Again, for many AI people and logicians, current logical
> > >formalisms are incapable of representing core ideas in some
> > >agent models - such as knowing not requiring infinite regress.
> >
> > that's why it is necessary to keep the theory, its formal
> > expression, and its implementation distinct. Certainly, existing
> > logics are insufficient to express a lot of things (motivations
> > even more than knowledge). They express a subset of existing
> > theories on mental states and processes. But the same is true
> > for computational implementations.
>
> I wouldn't agree that, just because a particular formal
> language is unable to express a particular idea, it cannot
> express a theory.
>
> >
> > >For these models - new "logics" are needed. Until then, the
> > >program itself is the logic and is the theory.
> >
> > perhaps the logic, but not the theory (see above)
> >
> > > The results generated by the simulation are the hypotheses or
> > > predictions.
> >
> > I would say they are often data in need of explanation
>
> The hang-up here may be that "explanation" can mean different
> things. I have some experience with this that may help to
> clarify. I've worked on the problem of explaining and
> predicting equilibrium distributions of resources across
> positions in an exchange network. One theory, for example,
> uses a graph theoretic method to solve for the relevant
> values. I also have a simulation that iterates through a
> series of negotiations and exchanges between individuals.
> There is a high degree of correspondence (though not perfect)
> between the simulation and the theory--and also with the
> results of empirical studies, for that matter. I would
> contend that both the simulation and the graph theory offer
> explanations for what happened in the experiments. If only
> the simulation existed, Rosaria might look at its output and
> say "these are data in need of explanation" when in fact the
> simulation does embody an explanation of a certain type--one
> that, in this case, predicts the observed output as a
> consequence of a structural arrangement of actors engaging in
> a series of decisions and exchanges, bound by certain
> contextual rules and cognitive capacities. The explanation
> that Rosaria may think is missing is something like the graph
> theory which predicates the pattern of exchange outcomes on
> the network's structural form, assuming that certain
> conditions are satisfied.
>
> In other words, the graph theory has supplied one account of
> the phenomenon--a relatively macro-level account, in this
> case. The simulation has supplied another account, a
> relatively micro-level one in this case. I prefer to think of
> the only real "data" as being that which was generated in the
> empirical experiments, and against which both the
> graph-theoretical and the simulation-theoretic accounts may
> be compared. The existence of the graph theory does not in
> any way diminish the status of the simulation as a theory.
> Only a poor fit with the empirical evidence, or the existence
> of a superior alternative theory (in terms of its precision
> and breadth), can do that.
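
[A toy sketch, in Python, of the kind of iterated-negotiation
simulation described above. It is purely illustrative: the three-actor
line network A-B-C, the 24-point pool, and the 0.5-point
demand-adjustment rule are assumptions made up for this example, not
the actual model or theory under discussion.]

```python
import random

def simulate(rounds=5000, pool=24, seed=0):
    """Iterated-negotiation sketch on a 3-actor line network A-B-C.

    Each round the central actor B negotiates with one peripheral
    partner over splitting a fixed pool of points. Compatible demands
    produce an exchange; demands drift up after success and down after
    failure or exclusion. (Illustrative only.)
    """
    rng = random.Random(seed)
    demand = {"A": pool / 2, "B": pool / 2, "C": pool / 2}
    earnings = {"A": 0.0, "B": 0.0, "C": 0.0}
    deals = {"A": 0, "B": 0, "C": 0}
    for _ in range(rounds):
        partner = rng.choice(["A", "C"])  # B can exchange with only one
        excluded = "C" if partner == "A" else "A"
        if demand["B"] + demand[partner] <= pool:  # compatible: deal
            surplus = pool - demand["B"] - demand[partner]
            earnings["B"] += demand["B"] + surplus / 2
            earnings[partner] += demand[partner] + surplus / 2
            deals["B"] += 1
            deals[partner] += 1
            demand["B"] = min(pool, demand["B"] + 0.5)  # success: ask more
            demand[partner] = min(pool, demand[partner] + 0.5)
        else:  # no deal: both concede a little
            demand["B"] = max(0.0, demand["B"] - 0.5)
            demand[partner] = max(0.0, demand[partner] - 0.5)
        # the excluded peripheral concedes under exclusion pressure
        demand[excluded] = max(0.0, demand[excluded] - 0.5)
    # mean points earned per completed exchange, by position
    return {k: earnings[k] / max(deals[k], 1) for k in earnings}
```

[On this toy setup, the central actor tends to end up with more points
per exchange than the peripheral actors - the kind of structurally
driven outcome that a graph-theoretic account predicts at the macro
level and the simulation produces from micro-level decisions.]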
>
> >
> > >Here I would say that the simulation results are used to uncover
> > >or observe the processes - but the simulation model is a formal
> > >description of the process. The language in the model is just a
> > >distinct symbol system for explicitly formulating the theory.
> >
> > The issue here is the explicitness. A painting by Picasso
> > generates emotions but is not a theory of them. In some sense,
> > one can say that emotions are incorporated into the painting,
> > but not in an explicit way.
> > >
> > >I would suggest that this is a philosophy of science question
> > >involving, among other issues, what features make a symbol
> > >system adequate for formulating theory.
> >
> > right. I think it would be important to have such a public
> > discussion somewhere.
>
> Actually, I think it's more of a linguistic issue than a
> philosophical one. What hasn't been discussed much is the
> process of assigning meaning to the symbols used in a
> simulation. Just as a discursive theory is only useful
> insofar as the meanings of its terms are shared by all of its
> "users," so too must the meanings of the terms of a
> simulation be shared by all of its users. This requires
> explicit definitions for any terms that might be understood
> in different ways by different people--not common practice
> among authors of informal theories.
>
> Barry Markovsky
>
> >
> > Rosaria Conte
> >
> > National Research Council, Institute of Cognitive Science and
> > Technology, V.LE Marx 15, 00137 Roma. LABSS (Laboratory of
> Agent Based
> > Social Simulation) & University of Siena - Communication Sciences -
> > "Social Psychology"
> >
> > voice:+39+06+86090210;
> > fax:+39+06+824737;
> > cell. 3355354626
> > email: [log in to unmask] - http://www.istc.cnr.it/lss/
> >
>