> There are then goal-directed (Soar, Jack, etc.) and activity-oriented
> (Brahms?) platforms. I would be interested to learn about the relative
> virtues of these and other _types_ of platforms for capturing emergent
> social and individual behaviour. I guess what I am after here is a
> discussion of the relationship between, on the one hand, different
> modelling paradigms hardwired into various platforms and, on the other
> hand, support for modelling the features of social processes leading to
> the emergence of norms and beliefs.
Well, I'm not sure the question is what these platforms offer FOR
modelling social aspects of a multi-agent system, e.g. groups and
roles -- if so, I already mentioned it's not much, though preliminary
work has been done and is possibly practical. Brahms is an exception
in that a particular organisational model has been developed over the
years, but then you have to be happy not using a declarative language,
which for me is important (for various practical, philosophical, but
also technical reasons). But I think the question was: why use any of
the cognitive agent platforms if you are interested in the emergence
of social phenomena? Or: what do I (as a social simulation
researcher) get out of using such platforms (assuming you are as
brave as Rui in delving into this relatively new territory)?
Personally, my motivation for pushing this line of approach was a
paper by Castelfranchi ("The theory of social functions: Challenges
for computational social science and multi-agent learning", Cognitive
Systems Research 2(1):5-38, 2001), where he puts forward the idea
that only social simulation with cognitive agents ("mind-based social
simulations", as he calls them) will allow the study of agents' minds
individually and the emerging collective actions, which ***co-evolve,
determining each other***. So the point is precisely that if you are
interested in the SOCIAL phenomena, you can't get away without also
being interested in the cognitive processes which allowed the social
process to emerge. To avoid too much philosophical debate, mind the
conditional: IF you agree that this is the case, then you don't have
much of an option other than having a symbolic representation of the
agents' minds. All of this is to answer the question: what you get by
using those platforms is precisely a "mind" to look at. When you find
the social phenomena you were looking for, you can "inspect the
agents' minds" and say: oh, so this is what this and that agent
believed and wanted to achieve, and such and such was the know-how
that led them to this nice social behaviour I got.
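Just to make that concrete: here is a minimal sketch, in plain Java,
of what "inspecting an agent's mind" after a run could look like. All
the class and method names are made up for illustration -- this is
not the API of any actual platform.

    import java.util.List;

    // Hypothetical view of the symbolic mental state a BDI-style
    // agent exposes; purely illustrative, not a real platform's API.
    interface InspectableAgent {
        String name();
        List<String> beliefs();   // from perception and communication
        List<String> goals();     // what it was trying to achieve or do
        List<String> plansUsed(); // the know-how it actually applied
    }

    class MindInspector {
        static void dump(InspectableAgent ag) {
            System.out.println("Agent " + ag.name());
            System.out.println("  believed: " + ag.beliefs());
            System.out.println("  wanted:   " + ag.goals());
            System.out.println("  know-how: " + ag.plansUsed());
        }
    }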
Now, to address Alan's point that if you commit to a wrong model of
human cognition then you have no chance of trusting your
simulation... The point missed, also by Maarten, is that by using
these goal-based (or BDI-based, or whatever you wanna call them)
platforms you don't necessarily commit to a philosophical position on
the human mind. The BDI theory has been transformed into a practical
style of programming, which gives you nice symbolic representations
for what agents believed (from perception and communication), what
they were trying to achieve or simply do (goals), and the know-how
they had (a library of plans -- note though that there is no planning
from first principles in those platforms!). But as with any
programming language, YOU can program it to do pretty much anything
that's computationally tractable -- all you get is better
abstractions for design and implementation than you would get with
object orientation.
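To illustrate what those abstractions boil down to, here is a
stripped-down sketch in plain Java (all names invented; real
platforms have their own, richer APIs): beliefs, goals, and a library
of pre-written plans selected by matching, with no planning from
first principles.

    import java.util.*;
    import java.util.function.Predicate;

    // Minimal BDI-flavoured reasoning step, purely illustrative.
    class BdiAgentSketch {
        final Set<String> beliefs = new HashSet<>();    // from perception/communication
        final Deque<String> goals = new ArrayDeque<>(); // to achieve or simply do
        final Map<String, Plan> planLibrary = new HashMap<>();

        // A plan is pre-written know-how: applicable in a given belief
        // context, with a body to execute. No planning from scratch.
        record Plan(Predicate<Set<String>> context, Runnable body) {}

        void step() {
            if (goals.isEmpty()) return;
            String goal = goals.peek();
            Plan p = planLibrary.get(goal);
            if (p != null && p.context().test(beliefs)) {
                p.body().run(); // run the pre-written know-how
                goals.pop();    // goal handled (no failure handling here)
            }
        }
    }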
If you think you need to "look at the minds" and that this would be
useful for your hypotheses, I can't see why you shouldn't try those
platforms (provided you have some programming skills, of course).
Also, they are all Java-based, so you can carry on using the Java
stuff you have. Just to exemplify this point further: whether there
are philosophical objections to Speech Act theory or not, it cannot
be denied that it has proved very useful for agent communication (and
it is also used in some of the goal-based platforms, btw). The same
applies to Rao & Georgeff's theory based on Bratman's ideas, in my
opinion.
It's just us computer scientists trying to make things practical by
stripping down the philosophical theories. So when Maarten stripped
down theories of "cognition in practice", he ended up providing
programmers with pretty much the same computational and conceptual
basis as BDI platforms. Which is quite interesting (if I'm right
about that -- he obviously disagrees).
Rafael