Your message (copied below) probably captures popular usage as closely as
any alternative anyone might want to suggest. There is one further
complication you need to be sensitive to, though. Validation and
calibration (in your terms) can only be demonstrated relative to some
intended use for the model. This is clearly true, as no model can ever
perfectly capture all the details of a real system (nor would you want one
to). In fact, you typically wouldn't even want to capture all of your
beliefs in a single model (not parsimonious). One can only decide how
much, and what kind of, deviation between model and system is OK
relative to some framework of what the model is being used for.
Our terminology gets us in trouble pretty frequently. When pushed to
be careful, nearly everyone's definition of validation includes a
"for an intended use" clause. However, people often speak of a model
being "validated" with no use specified. This might be OK if the use
were generally understood. But frequently, this imprecise terminology
is part of a pattern of fuzzy thinking that has led to all sorts of
mischief.
cheers
steve bankes
Steven C. Bankes, Ph.D.
Evolving Logic Associates
3542 Greenfield Ave.
Los Angeles CA, 90034
(310) 836-0958
>>>>>>>your message was:
>Thanks for those prompt replies. It seems to me that the situation can be
>expressed using the little FIXED_WIDTH font diagram below
>
>THEORY -- SIM_MODEL --> SIM_DATA
>++++++REALITY++++++ --> OBS_DATA
>
>There are several comparisons that the modeler would like to make.
>(1) Does my implementation of my simulation model (SIM_MODEL) correspond
>to the theory (THEORY) on which I am basing the model?
>(2a) Does the theory (THEORY) that underlies my model correspond to my
>beliefs (not observations) about the real world (REALITY)?
>(2b) Does the simulation model (SIM_MODEL) correspond to my beliefs (not
>observations) about the real world (REALITY)?
>(3) Do my simulated data (SIM_DATA) match the data I can observe in
>reality (OBS_DATA)?
>
>I think it is generally agreed that the _process_ of making the answer to
>question (3) 'yes' is called "calibration". I would agree with Paul
>Fishwick (citing Zeigler) that if the answer to question (1) is 'yes' then
>the model is "verified".
>
>Yet, there clearly is a difference between getting the SIM_DATA and
>OBS_DATA to match up and having the SIM_MODEL (or THEORY) bear any
>relation to REALITY. Thus, after some reflection on this issue, I would say
>that answering (2a) "yes" would mean that the theory is "valid" and
>answering (2b) "yes" would mean that the model is "valid".
>
>Of course this means that you can have almost any combination of "valid"
>or "invalid" models or theories, "verified" or "not verified" models, and
>"calibrated" or "uncalibrated" data.
>
>Scott (Moss): from what I read, you would disagree with this terminology.
>Am I right in this assumption? How would you make these distinctions?
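
To make the calibration/validation distinction concrete, here is a minimal
Python sketch of "calibration" in the sense of question (3): adjust the
free parameters of SIM_MODEL until SIM_DATA matches OBS_DATA. The model
(simple exponential growth), the observations, and the parameter names are
all hypothetical, chosen only for illustration.

import numpy as np
from scipy.optimize import minimize

# Hypothetical OBS_DATA at known time points.
times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
obs_data = np.array([1.0, 1.6, 2.7, 4.4, 7.3])

# SIM_MODEL: generates SIM_DATA from its parameters.
def sim_model(params, t):
    x0, r = params
    return x0 * np.exp(r * t)

# Mismatch between SIM_DATA and OBS_DATA (sum of squared errors).
def discrepancy(params):
    return np.sum((sim_model(params, times) - obs_data) ** 2)

# Calibration: search for parameters that minimize the mismatch.
result = minimize(discrepancy, x0=[1.0, 0.1])
print("calibrated parameters:", result.x)
print("remaining mismatch:", result.fun)

However well this fit turns out, it only addresses question (3). Whether
the exponential form corresponds to anyone's beliefs about REALITY is a
separate judgement (questions 2a/2b), which is exactly why calibration and
validation need to be kept distinct.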