If I may add to Dawn's take-home messages, reading between the lines of Ernesto's comment: I am guessing that when Ernesto talks about data-driven models he is also alluding to a lack of behaviour- and incentive-driven mechanisms of interaction (correct me if I'm reading this wrong, please); such mechanisms often come from relevant social theories, not just from adopting exogenous encounter rates and network topologies. In the context of trying to model entire societies, these theories of course have to be diverse and have to come from various disciplines. The lesson suggested here, then, is the relevance of understanding and modelling social interactions beyond stylized 'nudge' ideas.

Best,
Omar


_____________________________
Omar A Guerrero, PhD
ESRC-Alan Turing Institute Fellow
The Alan Turing Institute
@guerrero_oa
_____________________________
Senior Research Fellow
Department of Economics
University College London



On Wed, Mar 18, 2020 at 5:45 PM Dawn Parker <[log in to unmask]> wrote:
Although I haven't reviewed the model in detail, based on Ernesto's comments there may be three take-home messages.  It would be great if the ABM community could get a short editorial out on these issues, so we don't go down with the ship of a poorly done microsimulation (and it is not the fault of the genre, as there are excellent microsimulation models).

1)  Ensemble modelling is of the utmost importance in high-impact situations.  We should be developing and comparing system dynamics, ABM, statistical/machine learning, and microsimulation models.  Policy makers should have access to multiple modelling approaches.

2)  Validation is critical, and models that fail validation should be interrogated to identify their deficits and then improved.  This is what MOST modellers believe in and do, even if our methods are evolving.  Sensitivity analysis is also critical, even beyond the range of current data estimates (see the sketch after this list).  I always use Statistics Canada demographic projection reports with students to show them great examples of both.

3)  My own soapbox: our MIRACLE project focused on the idea that, in addition to sharing model code, we should share model output data (outputs from global sensitivity analysis in particular) and the algorithms used to analyze those data.  We should also create and share full modelling metadata: provenance and a description of the structure of both input and output data.  We had a nice prototype working where members of a user group could explore output data and post, share, and comment on analysis algorithms.  That prototype is no longer running, but I'm hoping that through CoMSES we might be able to create means of sharing links to output data and also full modelling-process metadata.
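
To make point 2 concrete, here is a minimal toy sketch of what sweeping a model's parameters beyond their estimated ranges looks like; the SIR model and every parameter value below are invented placeholders, not estimates from any real study:

import numpy as np

def sir_peak_infected(beta, gamma, n=1_000_000, i0=100, days=365):
    # Discrete-time SIR toy model; returns the peak number infected.
    s, i, r = n - i0, float(i0), 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

# Sweep both parameters well beyond any 'estimated' band and watch how
# the headline output moves, rather than testing only validated values.
for beta in np.linspace(0.1, 0.8, 8):      # contact/transmission rate
    for gamma in (1 / 14, 1 / 7, 1 / 3):   # recovery rate (1/infectious days)
        print(f"beta={beta:.2f}  gamma={gamma:.3f}  "
              f"peak infected={sir_peak_infected(beta, gamma):,.0f}")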

None of what I’m saying is new to this community, or I expect even controversial—but maybe this is a very good time to speak up more publicly about these points as a community.

Dawn

On Mar 18, 2020, at 5:41 AM, Hume Winzar <[log in to unmask]> wrote:

Insightful, Ernesto.
Thank you



From: News and discussion about computer simulation in the social sciences <[log in to unmask]> on behalf of Hofstede, Gertjan <[log in to unmask]>
Sent: Wednesday, March 18, 2020 8:33:28 PM
To: [log in to unmask] <[log in to unmask]>
Subject: Re: [SIMSOC] Disease models at the WP
 
Thank you Ernesto. Hear, hear.

Let's revisit this when the dust has settled.

Gert Jan

-----Original Message-----
From: News and discussion about computer simulation in the social sciences <[log in to unmask]> On Behalf Of Ernesto Carrella
Sent: Wednesday, 18 March 2020 10:15
To: [log in to unmask]
Subject: Re: [SIMSOC] Disease models at the WP

It'll be a while before the dust settles, but to me the main lesson is that a lot of validation and data-driven modelling is not just useless but actually deleterious.

The Imperial College pandemic flu model is an individual-based, data-driven model with location, census-dictated interactions, and realistic households and schools. It was published in Nature in 2006 ( https://www.nature.com/articles/nature04795 ). It also provided completely garbage forecasts that, according to the numbers now coming in, would have caused the deaths of approximately 500,000 UK residents.
What happened is that, since the data and parameters for COVID-19 weren't available or were highly uncertain, the modellers last week simply reused the old influenza parameters. This generated the expert advice to just suffer through it and reach herd immunity.

The problem with data-driven models and validation is that you then need data to use them; under uncertain and changing conditions, where the data just aren't there, the model outputs complete nonsense. The classic Type III error: an accurate answer to the wrong problem.
If you can't move your parameters quickly, and far away from what you validated ten years ago, your model has no use.
As the Financial Times reported (https://www.ft.com/content/249daf9a-67c3-11ea-800d-da70cff6e4d3):

> The latest evidence suggests that 30 per cent of patients admitted to hospital with Covid-19 will need critical care in an intensive care unit, he said. Previous estimates, based on experience with viral pneumonia, were too low.
> Critical care bed demand would be eight times capacity after mitigation measures were applied, and around 30 times capacity in both the US and UK in an "uncontrolled epidemic".
This seems like a massive sensitivity-analysis failure, but one that is probably hard even to diagnose in such a huge model.
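
Back of the envelope, the swing from a flu-derived critical-care fraction to the 30 per cent now observed is enough on its own to move the answer from "manageable" to "overwhelmed". A toy illustration; every number below except the 30 per cent from the FT piece is a made-up placeholder:

# Critical-care bed demand as a function of the ICU fraction alone.
hospitalised_per_day = 5_000   # hypothetical peak admissions per day
icu_stay_days = 10             # hypothetical mean ICU length of stay
icu_capacity = 4_000           # hypothetical national ICU bed count

for label, icu_fraction in [("flu-era assumption", 0.05),
                            ("Covid-19 (FT, ~30%)", 0.30)]:
    beds = hospitalised_per_day * icu_fraction * icu_stay_days
    print(f"{label}: {beds:,.0f} beds needed, "
          f"{beds / icu_capacity:.1f}x capacity")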

But parameter uncertainty is actually not the main problem. The model, even as published yesterday, still can't simulate the strategies in place in Taiwan, Singapore and Korea: contact tracing + massive testing + selective containment. Partly this means that these policies can't be simulated for the UK and US, so we don't know whether they could be applied there. But more importantly, because the model doesn't simulate these patterns, it cannot be conditioned on them and therefore cannot update its parameters with the information they contain.
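
To sketch what I mean by conditioning: if the model could simulate, say, contact tracing, then even a crude approximate-Bayesian-computation loop could use outcomes observed under tracing to discard parameter draws. Everything below (the branching process, the traced fraction, the observed count) is invented for illustration:

import random

def simulate_cases(r0, traced_fraction, generations=10, seed_cases=100):
    # Stylised branching process: tracing removes a fraction of
    # secondary cases in every generation. Purely illustrative.
    cases = seed_cases
    for _ in range(generations):
        cases *= r0 * (1 - traced_fraction)
    return cases

observed_cases = 500    # hypothetical case count observed under tracing
tolerance = 250         # crude ABC acceptance band

accepted = []
for _ in range(100_000):
    r0 = random.uniform(1.5, 4.0)                 # prior over R0
    sim = simulate_cases(r0, traced_fraction=0.6)
    if abs(sim - observed_cases) < tolerance:     # keep draws that fit the data
        accepted.append(r0)

print(f"posterior mean R0 ~ {sum(accepted) / len(accepted):.2f} "
      f"({len(accepted)} draws accepted)")

A model that cannot represent the policy at all gets no such update; the information in Taiwan's or Korea's numbers is simply thrown away.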

What can we learn from the success in Taiwan and the failure in Milan? My feeling is that these amorphous patterns are actually highly informative for policy-making. We all discount models that only produce "stylized patterns", but in this case that's precisely what we need, and complicated models just don't seem flexible enough to even try to reproduce them.
########################################################################

To unsubscribe from the SIMSOC list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=SIMSOC&A=1

########################################################################