Hi
>> Agents are not fully autonomous, since they are contextualized and even
>> possibly constrained by the environment, right?
>> If one attributes a degree of autonomy to a reactive agent, like in the
>> work on adjustable autonomy, giving the agent the chance of not
>> complying or deviating from the norm, wouldn't it suffice for the agent
>> to be autonomous?
>>
>
> It would not, unless you replace 'chance of not complying' with 'decision
> of not complying', but this takes us back to cognitive agents.
>
Does it not depend on what the research aim is? In the review I posted
yesterday, Paul Thagard argues that the book "underestimates the
importance of explanation compared to prediction. In psychology and
neuroscience, computation is used in prediction, but the primary role is
in explanation by showing how postulated mechanisms can generate phenomena."
Now, some people would argue that - for many phenomena - prediction is
as much 'explanation' as you're going to get. Newton said of his laws of
gravity that "I have not been able to discover the causes of those
properties, and I frame no hypothesis... it is enough that gravity
really does exist, and acts according to the laws which we have explained."
In economics, Friedman argued for the same test: does your theory have
predictive power? All hypotheses have assumptions - but according to
Friedman, the conformity of those assumptions to 'reality' is NOT a test
of the validity of the hypothesis: the test is predictive power only. So
any theory that has a rational agent as its foundation can't be
falsified on the grounds that people aren't like that.
The obvious response there is "but economics doesn't have any predictive
power, you numpty. If their theories don't predict successfully,
economists at the World Bank just tell the country implementing them that
it didn't do enough, or did it too fast; certainly, the theory is
sound." So it's never falsifiable, in reality.
But there is an important point here: a pollster is not interested in
whether a model uses a crude stochastic process, or has finely grained
AI agents. They might say: we know this type of area contains x type
of person; 69% of the time they vote this way; so we should or shouldn't
put our money there. The research aim is quite specific - and doesn't
need or want to ask WHY they vote this way, any more than NASA needs to
know why gravity works in order to get things into space.
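To make that concrete, the pollster's whole decision rule fits in a few lines - no agent internals required. This is a purely illustrative sketch: the function name, the 0.5 spending threshold, and the input shares are my made-up parameters (only the 69% vote rate comes from the example above).

```python
def worth_targeting(share_of_type_x, vote_rate=0.69, threshold=0.5):
    """Aggregate rule: expected vote share from type-x voters alone.

    Spend money in the area only if that expected share clears the
    threshold. Note there is no model of WHY anyone votes this way.
    """
    return share_of_type_x * vote_rate >= threshold

print(worth_targeting(0.8))  # 0.8 * 0.69 = 0.552 -> True
print(worth_targeting(0.5))  # 0.5 * 0.69 = 0.345 -> False
```

The point being: the base rate is the whole model, and it answers the research question by itself.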
I'm also still a newbie to modelling, but one example: I did a very
simple model of producers, workers and consumers. It needed no more than
a set of for loops, and I ran it in Matlab. Now, it could be that I
could 'get more' by giving each agent vastly more cognitive /
computational complexity than "Pick a random place to purchase. Is this
a price I'm willing to pay? Do I have the money?" followed by adjusting
the price of product / labour accordingly. But without explicitly
stating what I'm trying to achieve, how would I know? Without being
explicit about this, we could play tennis all day with 'random /
decision-making'.
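For what it's worth, the loop structure of that model was no more than something like the following - a Python analogue of the Matlab version, with every parameter value (counts, price ranges, the 2% price nudge) an illustrative choice of mine rather than the actual numbers I used:

```python
import random

random.seed(42)  # reproducible run; seed choice is arbitrary

N_PRODUCERS, N_CONSUMERS, STEPS = 5, 20, 50

# Each producer posts a price; each consumer has a budget and a
# willingness-to-pay. All ranges here are illustrative.
producers = [{"price": random.uniform(5, 15), "sales": 0}
             for _ in range(N_PRODUCERS)]
consumers = [{"money": 100.0, "willing_to_pay": random.uniform(8, 12)}
             for _ in range(N_CONSUMERS)]

for step in range(STEPS):
    for p in producers:
        p["sales"] = 0
    for c in consumers:
        p = random.choice(producers)  # pick a random place to purchase
        # Is this a price I'm willing to pay? Do I have the money?
        if p["price"] <= c["willing_to_pay"] and p["price"] <= c["money"]:
            c["money"] -= p["price"]
            p["sales"] += 1
    for p in producers:
        # Change price according to demand: nudge up if selling, down if not
        p["price"] *= 1.02 if p["sales"] > 0 else 0.98

avg_price = sum(p["price"] for p in producers) / N_PRODUCERS
print(round(avg_price, 2))
```

Nothing in there is smarter than a coin flip plus a budget check, and yet prices still drift toward what consumers will bear - which is exactly the behaviour I was after.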
I'm not saying I do this yet, but I feel personally that I need to more
clearly justify why I'd use AI agents, rather than using an aggregate
approach, or just giving the agents mostly random choices - if these
seem to approximate the behaviour I'm after. Coming back to the first
quote - "the primary role is in explanation by showing how postulated
mechanisms can generate phenomena". Ptolemy's geocentric theory of
astronomy gave good enough predictions to be useful: it accounted for
the movement of the heavenly bodies. So it met this goal of explanation
- it showed a mechanism that generated what people saw and measured. But
it did this through some fairly tortuous maths (and was wrong). What
made Copernicus' system better? It was more parsimonious. It fitted the
observations better, which is to say with less need for ad hoc add-ons.
(Eventually, it got to be confirmed too, which is perhaps not something
social simulators can do very easily.)
The point? It's quite possible to come up with more than one model to
explain a phenomenon. Being still new to all this, I'm scrabbling about
to find reasons why I should make things complicated when they could be
simple.
Just to complicate matters further: Newtonian gravity has, in fact, been
falsified by relativity. But it's still a perfectly good tool for most
jobs that Earth-bound engineers need to do. So again - and this is the
key thing for me - *depending what your task is*, there's nothing wrong
in principle with simplification.
I'm not sure where I'm going with this... thinking aloud. I'll stop now...
Dan