Email discussion lists for the UK Education and Research communities


ALLSTAT Archives

allstat@JISCMAIL.AC.UK


ALLSTAT July 2008

Subject: Last call to register for "Complex Computer Models"
From: Petra Graham <[log in to unmask]>
Reply-To: Petra Graham <[log in to unmask]>
Date: Mon, 14 Jul 2008 13:29:02 +1000
Content-Type: text/plain
Parts/Attachments: text/plain (86 lines)

(Apologies for cross posting)

Hi Everyone, 

A workshop on Calibration and Validation of Computer Models is being held as a satellite event to ISBA08, on the 27th and 28th of July at Macquarie University, Sydney. Abstracts for each of the workshop speakers are available below and via the "Complex Computer Models" link on the ISBA 2008 webpage: http://www.isba2008.sci.qut.edu.au/workshops2008.shtml#sydney

Please register before Friday 18th July (this Friday!).

Titles and abstracts are provided below:

Susie Bayarri: Assessing the risk of catastrophic events by combining statistical and computer models

Abstract:
Risk assessment of rare natural hazards -- such as large volcanic pyroclastic flows -- is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is utilized to provide the needed extrapolation to unseen parts of the hazard space. Statistical modeling of the available data is needed to determine the initializing distribution for exercise of the computer model. In dealing with rare events, direct simulations involving the computer model are prohibitively expensive. Solution instead requires a combination of adaptive design of computer model approximations (emulators) and rare event simulation. The techniques that are developed for risk assessment are illustrated on a test-bed example involving volcanic flow.

Tiangang Cui: Statistical inversion and Markov Chain Monte Carlo methods in geothermal model calibration

Colin Fox: TBA

James Gattiker: On design for parameter inference in emulators

Abstract:
In the study of computer models, statistical approximations of simulation responses over a parameter space allow analytical approaches that are otherwise out of reach when simulations are expensive and data are sparse. Design for constructing accurate emulators has several open questions; we examine the interplay of the choice of correlation function, the inference of correlation function parameters, and their effect on the predictive accuracy of Gaussian process emulators. Our approach to design is to examine a hybrid method of pseudorandom sequences and optimal design based on optimizing Fisher information for parameter inference. We present the results of simulation studies of parameter inference and design, and discuss the implications with respect to the problem of climate modeling.
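One ingredient of such hybrid designs is a space-filling sequence. As a minimal illustration, here is a sketch of a Halton low-discrepancy sequence; the choice of Halton (rather than some other quasi-random sequence) is our assumption for illustration, not something the abstract specifies:

```python
def halton(index, base):
    """index-th element of the van der Corput sequence in the given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_design(n, bases=(2, 3)):
    """n points of a low-discrepancy Halton design in [0, 1)^len(bases),
    one coprime base per input dimension."""
    return [[halton(i, b) for b in bases] for i in range(1, n + 1)]

design = halton_design(20)  # 20 well-spread candidate design points
```

Points generated this way fill the input space far more evenly than independent uniform draws, which matters when every simulator run is expensive.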

Dave Higdon: Bayesian approaches for combining experimental data and computer models

Abstract:
By augmenting experiments with detailed simulation-based physical models one can greatly leverage the amount of information that even a limited set of experiments can provide.  This tutorial describes Bayesian modeling and estimation techniques that may be used to combine these two sources of information.  These methods include designing simulation campaigns, modeling simulation output, estimation - or calibration - of key simulation model parameters, and accounting for major sources of uncertainty.  Various response surface models will be discussed, as will model formulations for combining the various sources of information.

Leanna House: Second order exchangeable emulators to assess initial condition uncertainty

Abstract:
We address the uncertainty of deterministic computer models that rely on both input parameters and initial conditions. We refer to such models as semi-deterministic. Purely deterministic computer models either do not have an initial condition or fix (without error bounds) the value for the initial condition, so that the same output will result from one set of input parameter values even when the model is implemented multiple times. Semi-deterministic models, however, allow the condition to vary, and thus have the potential to produce more than one result per input. When multiple outcomes per input are present, current approaches rely primarily on summary statistics (e.g., mean and variance per input) and apply standard deterministic model uncertainty analysis approaches. However, inferences based solely on such statistics implicitly require strong assumptions which we are unwilling to make. Thus, we introduce the notion of latent computer model outcomes, which correspond to the results of the semi-deterministic model when using the appropriate, but unknown, initial condition for the physical system of interest. The goal of this paper is to make inferences about the latent model given a sequence of realised semi-deterministic model evaluations. We consider the sequence elements to be second order exchangeable and use Bayes linear methods to assess the posterior expectation and variance of the latent model given the realised evaluations. We demonstrate our methods using semi-deterministic results from a galaxy formation model called Galform that relies on initial specifications of dark matter.

Jason Loeppky: Choosing the sample size of a computer experiment

Abstract:
In recent years virtual experiments implemented by a complex computer code or mathematical model are supplementing or even replacing physical experiments. The computer code mathematically describes the relationship between several input variables and one or more output variables. Often the computer models in question are computationally demanding, so direct evaluation of the code for optimization or validation is not possible in general. The general strategy is to build a statistical model to act as a surrogate or an emulator of the true code. A long-used rule of thumb for sample size takes a run size that is 10 times the number of active dimensions. In this talk we investigate this rule of thumb for a variety of problems encountered in practice. In some cases we will show that increasing the sample size has a large effect on the prediction quality, and in other cases increasing the sample size has little to no effect. These issues will be demonstrated using a model for polar ice caps and a model for the ligand activation of a G-Protein in yeast.
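The n = 10d rule of thumb is usually paired with a space-filling design over the inputs. A minimal sketch, assuming a random Latin hypercube design on the unit cube (the design family here is our illustrative choice, not stated in the abstract):

```python
import random

def latin_hypercube(n, d, seed=0):
    """One random Latin hypercube sample of n points in [0, 1)^d:
    each dimension is split into n equal bins, with exactly one point per bin."""
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        # one stratified coordinate per bin, then shuffled across runs
        col = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # n rows, each a d-dimensional input point

d = 3            # number of active input dimensions (illustrative)
n = 10 * d       # the n = 10d rule of thumb discussed in the talk
runs = latin_hypercube(n, d)
```

Each of the resulting `runs` would then be evaluated by the simulator, and the input/output pairs used to fit the emulator.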

Jeremy Oakley: Decision-theoretic sensitivity analysis for complex computer models

Abstract:
We consider the use of computer models in decision-making, and use decision-theoretic arguments to conduct a sensitivity analysis based on the expected value of perfect information for quantifying the 'importance' of each uncertain input parameter in a model. Standard Gaussian process emulators are used for efficient computation, and we address the problem of quantifying uncertainty in the sensitivity analysis results due to the use of an emulator with limited model runs.
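The expected value of perfect information for an input can be estimated by plain Monte Carlo: compare the expected utility of deciding after learning the input with that of deciding before. The utility function and prior below are illustrative toy choices, not from the talk (which uses emulators precisely because such direct simulation is expensive):

```python
import random

def evpi(utility, actions, sampler, n=20000, seed=1):
    """Monte Carlo expected value of perfect information:
    E_theta[max_a U(a, theta)] - max_a E_theta[U(a, theta)]."""
    rng = random.Random(seed)
    draws = [sampler(rng) for _ in range(n)]
    value_with_info = sum(max(utility(a, t) for a in actions) for t in draws) / n
    value_without = max(sum(utility(a, t) for t in draws) / n for a in actions)
    return value_with_info - value_without

# toy problem: choose a in {0, 1}; theta ~ Uniform(0, 1);
# action 1 pays theta, action 0 pays a safe 0.5 (analytic EVPI = 0.125)
u = lambda a, t: t if a == 1 else 0.5
value = evpi(u, [0, 1], lambda rng: rng.random())
```

EVPI is always non-negative, and an input with EVPI near zero is, in this decision-theoretic sense, unimportant.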

Jonty Rougier: Bayes linear prediction with multiple treatments: application to avalanche modeling

Abstract:
We have steady-state snow velocity profiles from ten large-chute experiments, where each experiment takes place under different environmental conditions.  Based on these we would like to predict the velocity profile across the full range of environmental conditions.  This large number of observations and predictands poses challenges for fully-probabilistic methods, but can be easily handled within a Bayes linear approach.  We show how multiple treatments can be incorporated into the 'standard' model-based inference, and illustrate a detailed elicitation for such an inference.  This is joint work with Martin Kern at the Swiss Federal Institute for Snow and Avalanche Research, Davos.
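A Bayes linear analysis updates prior expectations and variances using only first- and second-order belief specifications, which is what keeps large collections of observations and predictands tractable. A minimal scalar sketch of the adjustment (the numbers are illustrative, not from the avalanche application):

```python
def adjusted_moments(e_x, var_x, e_d, var_d, cov_xd, d_obs):
    """Bayes linear adjustment of a scalar quantity X by a scalar observation D:
    E_D(X)  = E(X) + Cov(X, D) / Var(D) * (D - E(D))
    Var_D(X) = Var(X) - Cov(X, D)**2 / Var(D)"""
    gain = cov_xd / var_d
    return e_x + gain * (d_obs - e_d), var_x - gain * cov_xd

# prior beliefs about an unobserved velocity X, linked to a noisy measurement D
mean, var = adjusted_moments(e_x=10.0, var_x=4.0,
                             e_d=10.0, var_d=5.0,
                             cov_xd=4.0, d_obs=12.0)
```

In the multivariate case the scalar division becomes a solve against the variance matrix of all observations, but no full probability distributions need to be specified.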

Leonardo Soares Bastos: Diagnostics for Gaussian process emulators

Abstract:
Mathematical models, usually implemented in computer programs known as simulators, are widely used in all areas of science and technology to represent complex real-world phenomena. Simulators are often sufficiently complex that they take appreciable amounts of computer time or other resources to run. In this context, a methodology has been developed based on building a statistical representation of the simulator, known as an emulator. The principal approach to building emulators uses Gaussian processes. This work presents some diagnostics to validate and assess the adequacy of a Gaussian process emulator as a surrogate for the simulator. These diagnostics are based on comparisons between simulator outputs and Gaussian process emulator outputs for some test data, known as validation data, defined by a sample of simulator runs not used to build the emulator. Our diagnostics take care to account for correlation between the validation data points.
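The simplest diagnostic of this kind compares each validation output with the emulator's predictive mean, scaled by the predictive standard deviation. The sketch below computes these individual standardized errors; note it deliberately ignores the correlation between validation points that the full diagnostics in this talk account for, and the numbers are illustrative:

```python
import math

def standardized_errors(y_valid, mean_pred, var_pred):
    """Individual standardized prediction errors for emulator validation:
    D_i = (y_i - m(x_i)) / sqrt(v(x_i)).  Values with |D_i| > 2 suggest
    a conflict between the emulator and the simulator at that point."""
    return [(y - m) / math.sqrt(v)
            for y, m, v in zip(y_valid, mean_pred, var_pred)]

# simulator outputs vs. emulator predictive means/variances at 3 held-out runs
errs = standardized_errors([1.0, 2.5, 3.0],   # simulator outputs
                           [1.1, 2.0, 3.0],   # emulator predictive means
                           [0.04, 0.04, 0.01])  # emulator predictive variances
flags = [abs(e) > 2 for e in errs]
```

Many large errors indicate an inadequate emulator; errors that are all very small instead suggest the emulator's predictive variance is too conservative.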

David van Dyk: Statistical analysis of stellar evolution

Abstract:
Color Magnitude Diagrams (CMDs) are plots that compare the magnitudes (luminosities) of stars in different wavelengths of light (colors). Highly non-linear correlations among the mass, color and surface temperature of newly formed stars induce a long narrow curved point cloud in a CMD known as the main sequence. Aging stars form new CMD groups of red giants and white dwarfs. The physical processes that govern this evolution can be described with mathematical models and explored using complex computer models. These calculations are designed to predict the plotted magnitudes as a function of the parameters of scientific interest, such as stellar age, mass, and metallicity. Here, we describe how we use the computer models as complex likelihood functions in a Bayesian analysis that requires sophisticated computing, corrects for contamination of the data by field stars, accounts for complications caused by binary stars, and aims to compare competing physics-based computer models of stellar evolution.


Richard Wilkinson: Calibrating computer models with high dimensional output

Abstract:
I will consider the calibration of complex computer models which produce highly multivariate output, typically time-series or spatio-temporal fields. Directly emulating these models is a computationally demanding task, and may not be possible for models with very high dimensionality. An alternative approach is to reduce the number of dimensions using a basis representation, for example the principal components, and emulate the computer model output using this reduced latent space representation. However, the data reduction will not typically produce an accurate representation of the field data, and so it is necessary to perform any calibration on the data space rather than the latent space so that reconstruction error is accounted for in the model parameters. I will illustrate these ideas on the UVic Earth system climate model.
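The basis-representation step can be sketched as a projection/reconstruction pair. The toy orthonormal basis and output vector below are illustrative, not from the UVic model; in practice the basis would come from the principal components of an ensemble of model runs:

```python
def project(y, basis):
    """Coefficients of output vector y on an orthonormal basis (list of vectors)."""
    return [sum(bi * yi for bi, yi in zip(b, y)) for b in basis]

def reconstruct(coeffs, basis):
    """Map latent-space coefficients back to the full output space."""
    n = len(basis[0])
    return [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(n)]

# toy 4-dimensional output field, 2-vector orthonormal basis
basis = [[0.5, 0.5, 0.5, 0.5],
         [0.5, 0.5, -0.5, -0.5]]
y = [1.0, 2.0, 3.0, 4.0]

z = project(y, basis)          # reduced representation the emulator works with
y_hat = reconstruct(z, basis)  # reconstruction used for data-space calibration
err = sum((a - b) ** 2 for a, b in zip(y, y_hat))  # reconstruction error
```

The nonzero `err` is exactly the discrepancy the abstract warns about: calibrating in the latent space alone would ignore it, which is why the calibration is performed on the data space.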

The workshop also includes a panel discussion moderated by Jim Berger.

If you have any questions about the workshop, please email Petra Graham at
[log in to unmask] or [log in to unmask]

Best wishes and hope to see you there,

Petra.

Dr Petra Graham
Department of Statistics
Division of Economic and Financial Studies
Macquarie University
Sydney NSW 2109
Australia

Ph: +61 2 9850 6138
Fax: +61 2 9850 7669



JiscMail is a Jisc service.
