Nick,

 

Validation is increasingly an issue in ABM as it tackles more and more empirical case studies. Your situation, as described, highlights what I consider to be an important mismatch (dare I say différend?) between what might be considered ‘traditional’ scientific modelling and modelling in the social sciences (or more generally in complex systems).

 

What it sounds like you have done is augment the data you have amassed with ‘prior knowledge’ about the system, and use computer simulation to find out, in a logically consistent way, what happens. If that is what your reviewer has asked, I don’t think it is fair to demand that this be ‘valid’ in the traditional modelling sense – e.g. what is your AIC or BIC? (Ecologists, who are more quantitatively minded, apparently fight over which is the better metric; but see https://doi.org/10.1111/2041-210X.12541.)
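(Purely to illustrate what that traditional question amounts to – and emphatically not something I am suggesting applies to your model – here is a minimal Python sketch, with made-up numbers, of how AIC and BIC trade a fitted model’s misfit against its parameter count under a simple Gaussian-error assumption:)

import numpy as np

def aic_bic(observed, predicted, n_params):
    """AIC and BIC for a fitted model, assuming i.i.d. Gaussian errors."""
    residuals = np.asarray(observed, dtype=float) - np.asarray(predicted, dtype=float)
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    # Maximised log-likelihood when the error variance is estimated as rss / n
    log_lik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
    aic = 2 * n_params - 2 * log_lik
    bic = n_params * np.log(n) - 2 * log_lik
    return aic, bic

# Hypothetical numbers: two candidate models fitted to the same five observations
observed = [3.1, 4.0, 5.2, 6.1, 7.3]
model_a  = [3.0, 4.1, 5.0, 6.2, 7.1]   # say, 2 fitted parameters
model_b  = [3.2, 3.9, 5.3, 6.0, 7.4]   # say, 4 fitted parameters
print("model A:", aic_bic(observed, model_a, 2))
print("model B:", aic_bic(observed, model_b, 4))

The point of the sketch is only that the whole apparatus presumes you already have observations to compute residuals against – which is exactly what you say you lack.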

 

The ‘validity’ of the process you undertook could be critiqued on (a) your choice of data and (b) the formalization of your knowledge that you implemented, but otherwise it seems to have been a rigorous scientific endeavour, from what you briefly describe. ‘Validity’ is, pragmatically, a question of whether you ‘believe’ or ‘trust’ what the model’s outputs say. In your case, I might not trust your data, or your formalization of knowledge, or, if I were particularly suspicious, the engine you used to generate the model’s outcomes given the data and program.

 

For traditional modellers, who bend sheets of mathematics around their data until it doesn’t look like they have over- or under-fitted it too much, this question is resolved into a series of metrics they then use to compare whose model is better. I think in ABM we do rather more than that, and need to develop the means to articulate better the value of (b) above, the formalization of knowledge. With my colleague Doug Salt, I had a stab at it here, FWIW: https://doi.org/10.1007/978-3-319-66948-9_8

 

Hassan et al.’s article in JASSS on forecasting (http://jasss.soc.surrey.ac.uk/16/3/13.html), which summarizes Armstrong’s book on the subject for an ABM context (see Table 1 in Hassan et al.), is revealing in that a number of the guidelines are social processes (EI1, EI2, EI3, A1 and A2 especially; but in collaborative modelling exercises most of the other guidelines in that table could easily be the outcome of workshops, focus groups, interviews or other social processes). The general point, I think, is that the extent to which ‘validation’ of a model is something objective, especially in the social sciences, is constrained. This is a matter of controversy in the ontology literature, where there is discussion about whether ontologies are ‘shared’ (I found a nice blog article by someone else who clearly got a dodgy review on the topic here: https://keet.wordpress.com/2017/01/20/on-that-shared-conceptualization-and-other-definitions-of-an-ontology/) and whether they are ‘discovered’ or ‘constructed’ – but this is fundamental philosophical territory that you can hardly be expected to resolve in your article.

 

To answer your direct question, then, the peer review process is one ‘validation’ in the sense of checking your formalization of knowledge and use of data. (Is your model open source?) But you could also do a couple of interviews with domain experts where you describe your model and ask their opinion of it, or ask them to talk about what they would include in a model of the situation you are studying, and compare that with what you did include.

 

But if the reviewer is *only* going to be convinced by a bunch of statistics showing that you’ve numerically reproduced reality to within some ‘acceptable’ measurable degree, then they are Reviewer 2. Of course, if you had empirical data, and some measure of your model’s ability to reproduce it (bearing in mind https://doi.org/10.1126/science.1116681 and http://jasss.soc.surrey.ac.uk/10/4/2.html), that would still be useful information; but it would be, I think, far from the whole story of whether what you have done is ‘valid’ or ‘scientific’ – certainly not to the extent that the lack of such information should, on its own, be a meaningful obstacle to the publication of your work. A good editor would use their discretion in such a situation…
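(To be concrete about what I mean by ‘some measure of your model’s ability to reproduce’ observations: if you ever obtained even partial data – say, counts of people at a handful of locations – something like the Python sketch below would serve. The numbers are invented and the choice of statistic is purely illustrative, not a recommendation:)

import numpy as np

def rmse(observed, simulated):
    """Root mean square error between observed and simulated values."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

def nrmse(observed, simulated):
    """RMSE normalised by the range of the observations, for comparability."""
    observed = np.asarray(observed, dtype=float)
    return rmse(observed, simulated) / float(observed.max() - observed.min())

# Invented counts of people per zone: observed vs. one model run
observed_counts  = [120, 340, 95, 410, 220]
simulated_counts = [130, 310, 110, 430, 200]
print("RMSE:", rmse(observed_counts, simulated_counts))
print("NRMSE:", nrmse(observed_counts, simulated_counts))

Even then, such a number would only tell you how close one output of the model is to one set of observations, not whether the formalized knowledge behind it is sound.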

 

I’d also highlight Oreskes et al.’s (1994) critique of validation in Science (https://doi.org/10.1126/science.263.5147.641) which points out that in open systems, the traditional logic of validation (all good models fit the data, my model fits the data, therefore my model is a good model) affirms the consequent.
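(Written out formally, with $G$ for ‘my model is a good model’ and $F$ for ‘my model fits the data’, the schema they criticize is:

\[
(G \Rightarrow F),\; F \;\not\vdash\; G
\]

A good fit is consistent with the model being good, but cannot establish it, since many different models can fit the same data.)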

 

Gary

 

From: News and discussion about computer simulation in the social sciences <[log in to unmask]> On Behalf Of Nicolas Malleson
Sent: 12 September 2018 08:03
To: [log in to unmask]
Subject: [SIMSOC] Validating ABMs in the absence of data

 

Hi SIMSOC,

I was wondering if anyone has any thoughts/advice about a difficulty that I’m having with validating a model. This is in response to (very fair) comments by reviewers on a paper that is under review, so I will talk about the problem in general terms. I think the discussion should be of interest to others on the list.

Colleagues and I have built a spatially realistic agent-based model with agents who move around. It’s based on a real urban area. We have used a survey to calibrate the behavioural parameters, such that the agents behave in a way that is consistent with the results of the survey. The survey is national, so not specific to our study area. We put the agents into a virtual environment, let them go, and see what happens.

The reason for creating this model in the first place is that we have no data on the spatial behaviour of the real individuals in our study area. So we’re hoping that, by implementing behaviour that is consistent with the results of the survey, the agents will give us some insight into the real dynamics of the case study area.

But how do we validate the model? Assume that there are no empirical data available for our study area (it is possible to try standing on the road and talking to people, but this is probably out of scope). What should an agent-based modeller do when they have an empirical model but no empirical validation data?

All the best,
Nick





Dr Nick Malleson
Room 10.114 (Manton building)
School of Geography, University of Leeds
[log in to unmask]
http://nickmalleson.co.uk/
0113 34 35248


