Needless to say, in this debate I completely agree with Rosaria.
On the more practical side of the debate (where I can contribute
more), BDI programming languages typically have ways of executing
plans simply as procedures. You can then see the "goal" as simply the
name for the "activity" that needs to be done. It's precisely when
you use what in this area are called "declarative goals" -- which is
what you call a goal -- that things become more complicated for these
languages. So Maarten's point against BDI is that not ALL human
activity is goal-based. Well, those activities are precisely the ones
that are EASY to model (in a BDI or any other agent architecture).
Also, making an agent irrational in a BDI programming language is
VERY EASY. It's when you NEED agents to have goal-driven behaviour
and be rational that things become difficult (but interesting), and
this is addressed (or at least, I hope, facilitated) by BDI languages
and hopefully by Brahms too -- hence although you got there from a
different perspective, it isn't a different place at all.
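
To make the procedural/declarative distinction concrete, here is a
minimal sketch in Python (not any actual BDI language such as
AgentSpeak/Jason; the agent and plan interfaces are invented purely
for illustration):

# Procedural reading: the "goal" is just the name of an activity;
# invoking it runs a plan body like an ordinary procedure call.
def clean_room(agent):
    agent.do("pick_up_toys")
    agent.do("vacuum_floor")

# Declarative reading: the goal names a state of the world to be
# achieved; the agent keeps trying applicable plans until that state
# holds or no plan applies -- which is exactly where the hard
# questions about rationality and commitment come in.
def achieve(agent, goal_holds, plans):
    while not goal_holds(agent.beliefs):
        applicable = [p for p in plans if p.context(agent.beliefs)]
        if not applicable:
            return False           # no way left to pursue the goal
        applicable[0].body(agent)  # executing a plan may change beliefs
    return True                    # success is defined by the world state
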
Rafael
On 17 Jan 2007, at 16:35, Maarten Sierhuis wrote:
> My responses are in between Ross's disagreements with my statements.
>
> Maarten.
>
> On Jan 16, 2007, at 7:17 AM, Rosaria Conte wrote:
>
>> Sorry for a delayed contribution to this discussion, but I could
>> read this exchange only recently.
>> Apart from a general agreement on the necessity of cognitive
>> agents to model norm emergence and propagation, I would like to
>> comment on a couple of statements that I am not sure how to
>> interpret, and whether to agree with. See below both statements
>> and comments.
>>
>> On 13-01-2007 19:26, "Maarten Sierhuis"
>> <[log in to unmask]> wrote:
>>
>>> “... However one has to be careful in selecting cognitive
>>> architectures as a model for large scale societal behavior.”
>>>
>> The statement that cognitive agents are needed to account for norm
>> emergence and propagation by no means implies selecting ‘cognitive
>> architectures as a model for large scale societal behavior’.
>
> But, then, what does the statement mean? Can you give some examples
> of the use of cognitive agents that are not based on a cognitive
> architecture? Also, I wasn't speaking strictly of norm emergence
> and propagation, but rather of the effects of the behavior of a
> large number of agents, when that behavior is generated by
> cognitive agents. My personal view is that norms and practice
> are closely related. Cognitive models are not practice models;
> rather, they are models of what goes on "in between the ears."
> Practice models, on the other hand, are about how activities are
> culturally and socially constructed and about how behavior is based
> on belonging to communities of practice (i.e. it is based on a
> shared understanding of normative activity).
>
>> Indeed, norm emergence and propagation do not imply that there
>> is a deliberate issuing of norms, nor even an understanding of
>> this process on the part of the agents contributing to it.
>> However, for a given behavior to spread by means of norms, the
>> agents displaying it, if autonomous, must form and reason upon
>> normative representations.
>
> I am sorry, but this confuses me. How can it be that entities
> reason upon representations of norms, but don't issue or understand
> these norms? This sounds like Searle's Chinese Room thought
> experiment, but then for norm emergence and propagation. So is
> there a "Strong Norm Emergence" and a "Weak Norm Emergence"
> argument?
>
> The word "reason" implies cognition of some form, which implies a
> cognitive architecture of some form.
>
>>
>>> Cognitive models are based on a theory of human cognition,
>>
>> I am not sure I agree about this. It of course depends on what is
>> meant by cognitive models. If what is meant is mental models, then
>> of course the cognitive psychological view of the mind is at
>> stake. But cognitive modeling interacts with, if it does not
>> subsume, the study of animal and artificial minds as well.
>
> I am not aware of any cognitive agents that are not, in some way,
> shape, or form, based on a theory of human cognition. BDI
> architectures are based on Bratman's theory of people as planning
> agents; cognitive architectures are based on Newell's or Anderson's
> cognitive theories. Neural nets are based on biological neural
> networks; however, this type of architecture should be kept separate
> from cognitive architectures. Cognitive architectures explicitly
> mimic cognitive brain functions, without using any neural nets, but
> using some sort of symbolic representation.
>
> It is probably due to my lack of knowledge of theories of
> cognition for animals other than humans that I can't say much
> about this. But I would say that the theory of artificial minds is
> very much related to the theory of human cognition. I would be
> interested in hearing about animal cognition theories that are not
> based on human cognition theories and are symbolic in nature.
>
>
>>>
>>> (...)
>>> Moreover, many scientists (e.g. in neuroscience, anthropology,
>>> and cognitive science) have in recent years developed counter-
>>> theories to the theory of the human mind as a "symbolic copy
>>> machine."
>>>
>> Although it is not entirely clear to me what a symbolic copy
>> machine is, I believe, on the contrary, that cognitive science in
>> general has little to say against the theory of the human mind as
>> a symbolic system.
>
> To claim that cognition is based on symbolic processing means
> that there is a copy function within the process: symbolic
> structures are copied from one place to another in order to store
> and recall them.
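>
> As a toy illustration (my own Python sketch, not any particular
> cognitive architecture's actual mechanism), the "copy machine" view
> amounts to something like this:
>
> import copy
>
> # Working memory holds the currently active symbolic structure.
> working_memory = {"percept": ("sees", "red", "block")}
> long_term_memory = {}
>
> # Storing is copying the structure into long-term memory ...
> long_term_memory["episode-1"] = copy.deepcopy(working_memory["percept"])
>
> # ... and recalling is copying it back; the symbols themselves
> # carry the content.
> working_memory["recalled"] = copy.deepcopy(long_term_memory["episode-1"])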
>
>> However, this by no means implies a particular commitment to a
>> view of agents as necessarily conscious, ratiomorphic, and
>> deliberative.
>
> Yes it does, at least a deliberative view, which I would posit
> requires consciousness. I am not sure what "ratiomorphic" means.
>
>> A cognitive (based upon symbolic representations) view of the
>> mind should not be equated with a strictly deliberative view of
>> agenthood.
>
> Maybe not in the field where you operate, but I would claim that in
> philosophy and cognitive psychology it does. Maybe you can give
> some examples that make your claim explicit.
>
>>
>>> ... (but, alas, not every human activity is goal-driven).
>>>
>> Of course. However, a cognitive theory of goals defines them as
>> symbolic internal representations triggering and guiding actions;
>> again, this by no means implies that goals are also attributed
>> the properties of being rational, consistent, conscious, and
>> necessarily chosen for action (and therefore planned).
>
> But that is not what the goal-based theories say. More importantly,
> if you use a BDI agent architecture (or an expert-system-based
> architecture, such as Jess) to model reasoning in your agents, then
> you are either implicitly or explicitly claiming that "goals are
> also attributed the property of being rational ...", simply because
> these architectures are based on the theory that rational,
> consistent, conscious choosing of actions is planned and goal-
> based. In other words, imho, you cannot use these architectures to
> implement your agent system and then claim that your model does not
> rely on these theories. That is why we developed our own BDI-like
> architecture that is not based on these theories, but on theories
> of situated action and on activity theory, which do not use the
> concept of a goal to model reasoning; it does not use goal-based
> planning to simulate perception-action and deliberation.
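>
> To sharpen that contrast, here is a rough sketch in Python (my own
> illustration, not Brahms code; all names are invented):
>
> def goal_driven_step(beliefs, goal, plan_library):
>     # Deliberation: commit to a plan whose outcome satisfies the goal.
>     for plan in plan_library:
>         if plan.achieves(goal) and plan.applicable(beliefs):
>             return plan.next_action(beliefs)
>     return None
>
> def activity_based_step(beliefs, activities):
>     # Situated action: behavior follows from the current situation;
>     # no goal state is represented or planned toward.
>     for activity in activities:
>         if activity.when(beliefs):
>             return activity.do(beliefs)
>     return None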
>
>>
>> Cheers
>> ross
>>
>