Interesting point.
I would say that another important condition for autonomy is choice.
If there is no capability to choose among two or more options, then the
agent can still have beliefs and goals, but it will not really be
autonomous, will it?
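To make that concrete, here is a minimal sketch in Python (the class, the
toy filters and the action triples are all my own invention, not anyone's
actual model) of an agent that applies the two filters Rosaria describes
below to external inputs, and that only exercises choice when more than one
admissible action survives:

import random

class FilteringAgent:
    def __init__(self, belief_filter, goal_filter):
        # Both filters are predicates supplied by the modeller; they decide
        # which external inputs the agent accepts as its own.
        self.belief_filter = belief_filter
        self.goal_filter = goal_filter
        self.beliefs = set()
        self.goals = set()

    def perceive(self, inputs):
        # Belief filter: decide whether and what input to accept as a belief.
        self.beliefs |= {i for i in inputs if self.belief_filter(i)}
        # Goal filter: decide whether and which input to adopt as a goal.
        self.goals |= {i for i in inputs if self.goal_filter(i)}

    def act(self, candidate_actions):
        # candidate_actions: (action, required_belief, served_goal) triples.
        options = [a for a, required_belief, served_goal in candidate_actions
                   if required_belief in self.beliefs
                   and served_goal in self.goals]
        if len(options) <= 1:
            # Zero or one admissible option: no choice is exercised, so on
            # the "choice" condition the agent is not really autonomous here.
            return options[0] if options else None
        return random.choice(options)  # a genuine selection among options

agent = FilteringAgent(
    belief_filter=lambda i: i.startswith("observed:"),
    goal_filter=lambda i: i.startswith("adopt:"),
)
agent.perceive(["observed: market is open",
                "adopt: obtain food",
                "gossip: ignore me"])
chosen = agent.act([
    ("buy bread", "observed: market is open", "adopt: obtain food"),
    ("buy fruit", "observed: market is open", "adopt: obtain food"),
    ("stay home", "observed: it is raining",  "adopt: rest"),
])
# Two admissible options remain ("buy bread", "buy fruit"), so the agent
# actually chooses; with only one option it would merely execute.

In the degenerate case, where the filters leave at most one admissible
action, the agent still has beliefs and goals but never gets to choose, and
that is the case I would hesitate to call autonomous.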
Virginia
Rosaria Conte wrote:
>Another good question, thanks Alan.
>Goals are not sufficient for autonomous agents. Another essential ingredient
>of autonomy is beliefs. An autonomous agent acts on the grounds of its own
>goals and its own beliefs. To state it differently, autonomous agenthood
>requires that two filters be applied to external inputs: a belief filter,
>which decides whether and what input to accept as a belief, and a goal
>filter, which decides whether and which input to select as a goal.
>
>ross
>
>
>On 23-01-2007 11:34, "Alan Penn" <[log in to unmask]> wrote:
>
>
>
>>OK - so I take it that having a goal is necessary for agent autonomy. Is it
>>sufficient?
>>[Alan Penn]
>>
>>
>>>In the mail "Re: [SIMSOC] Newbie on the list - working on emergence of n",
>>>Alan Penn wrote:
>>>
>>>
>>>>A quick question. For an agent to be autonomous must it have a goal? In
>>>>other words is it possible to imagine a simulation with autonomous social
>>>>agents in which individual agents do not possess 'goals'.
>>>>
>>>>
>>>Alan,
>>>
>>>But what would "autonomous" mean in that context?
>>>
>>>Briefly, it seems to me impossible to define "autonomy" as an objective and
>>>absolute notion. One cannot be autonomous per se, but only with respect to
>>>a given set of dependencies (relativity), and an observer (subjectivity).
>>>
>>>These dependencies can be broken down into two sub-categories: constraints
>>>and objects. Constraints can be seen as the "laws" of the environment in
>>>which the subject acts ("Autonomy is freedom under laws", Jean-Jacques
>>>Rousseau), and may also include other agents' actions. Objects are the
>>>"things" with respect to which the subject (or the agent) can be described
>>>as autonomous by the observer. And these "things" can either be goals (if
>>>they are explicitly manipulated by the agent) or "tasks" (in which case,
>>>the goal might be implicit and buried in the definition of the task, but
>>>nonetheless still exists).
>>>
>>>Don't know if I made myself clear enough. Anyway, the sentence "this agent
>>>is autonomous" (or not) does not possess any meaning by itself. The correct
>>>way to put it would be: "under these constraints, and with respect to this
>>>goal/task, this agent can be described by this observer as autonomous". So,
>>>defining agents, for example in a social simulation, as autonomous, without
>>>defining their goals appears to me to be an ontological impossibility (but I
>>>could be wrong).
>>>
>>>Cheers
>>>Alexis
>>>
>>>
--
Dr. Virginia Dignum
Institute for Computing and Information Sciences
Utrecht University
tel: +31-30-2539492
email: [log in to unmask]
url: http://www.cs.uu.nl/~virginia
----------------------------------------------
The world of humanity has two wings: one is woman,
the other is man. Not until both wings are
equally developed can the bird fly.
Baha-u-llah
----------------------------------------------