In the mail "Re: [SIMSOC] Newbie on the list - working on emergence of n",
Alan Penn wrote:
>A quick question. For an agent to be autonomous must it have a goal? In
>other words is it possible to imagine a simulation with autonomous social
>agents in which individual agents do not possess 'goals'.
Alan,
But what would "autonomous" mean in that context?
Briefly, it seems to me impossible to define "autonomy" as an objective and
absolute notion. One cannot be autonomous per se, but only with respect to
a given set of dependencies (relativity), and an observer (subjectivity).
These dependencies can be broken down into two sub-categories: constraints
and objects. Constraints can be seen as the "laws" of the environment in
which the subject acts ("Autonomy is freedom under laws", Jean-Jacques
Rousseau), and may also include other agents' actions. Objects are the
"things" with respect to which the subject (or the agent) can be described
as autonomous by the observer. And these "things" can either be goals (if
they are explicitly manipulated by the agent) or "tasks" (in which case the
goal might be implicit and buried in the definition of the task, but it
nonetheless still exists).
I don't know if I've made myself clear enough. Anyway, the sentence "this
agent is autonomous" (or not) does not possess any meaning by itself. The
correct way to put it would be: "under these constraints, and with respect
to this goal/task, this agent can be described by this observer as
autonomous". So defining agents, for example in a social simulation, as
autonomous without defining their goals appears to me to be an ontological
impossibility (but I may be wrong).
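To make the point concrete, here is a minimal sketch (my own toy model, not
from any existing framework; the names Agent, Observer and
describes_as_autonomous are all illustrative assumptions) in which autonomy
is not a property of the agent but a relation between an agent, a set of
constraints, a goal or task, and an observer:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # explicit goals the agent manipulates (the "objects" above)
    goals: set = field(default_factory=set)

@dataclass
class Observer:
    name: str

    def describes_as_autonomous(self, agent, constraints, goal_or_task):
        """Autonomy judged relative to constraints and a goal/task.

        The judgement is made *by this observer*; it is not a property
        the agent carries on its own.
        """
        if goal_or_task is None:
            # "this agent is autonomous" alone has no meaning
            raise ValueError("autonomy is undefined without a goal or task")
        # Toy criterion: the agent pursues the goal itself, and the goal
        # is not simply imposed as one of the environment's "laws".
        return goal_or_task in agent.goals and goal_or_task not in constraints

ant = Agent("ant", goals={"find food"})
modeller = Observer("modeller")
print(modeller.describes_as_autonomous(
    ant, constraints={"gravity"}, goal_or_task="find food"))  # True
```

Note that dropping the goal argument makes the question ill-posed (the
sketch raises an error), which is exactly the ontological impossibility
argued above.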
Cheers
Alexis