Scott,
I think you 'pays your money and you takes your choice' as they say - I find
it hard to see the point of 'capturing precisely' what stakeholders tell you
if you then apply this through a model based on an 'untrue' (read imprecise)
cognitive theory. Perhaps it would be more fruitful to state explicitly what
the aims of these kinds of models must be, given the limitations of current
technology. Are we even close to giving our software agents the cognitive
power of an ant or a mouse? If not then what we are looking for are 'created
phenomena' out of our simulations to help us test whether our 'high road'
theories and philosophies are fatally flawed, or perhaps more to the point,
to stimulate new theories.
I use EVAS because I am interested in the way that agents with long distance
vision of their environment are affected by the morphology of the buildings
and urban blocks that impede vision and movement: how is it that physical
and spatial structures and mobile sighted agents interact in the production
of co-presence, communication, etc.? But I would make no pretence that the
simulated agents had anything approaching cognition, even if I knew what
that was. In analytic terms the question remains strictly bounded, but the
point of doing this is to help develop theories of much more general
relevance - how is it that urban spatial structures evolve to support or
inhibit long term persistence of communities or economies, for example?
Alan
>
> For my purposes, I want an implementation platform that will enable me
> to capture precisely some of what stakeholders, in imprecise natural
> language, tell us that they do or expect or believe. I am not very
> concerned about the truth of any cognitive theory underlying the
> implementation platform that I use. In any case, it is not hard to
> argue that formalisms and cognitive theories are background to the
> design intentions of platform implementors and do not in practice
> provide a direct theoretical or logical basis for the models we produce.