Thanks for those prompt replies. It seems to me that the situation can be
expressed with the little fixed-width diagram below:
THEORY --> SIM_MODEL --> SIM_DATA
++++++REALITY++++++ ---> OBS_DATA
There are several comparisons that the modeler would like to make.
(1) Does my implementation of my simulation model (SIM_MODEL) correspond
to the theory (THEORY) on which I am basing the model?
(2a) Does the theory (THEORY) that underlies my model correspond to my
beliefs (not observations) about the real world (REALITY)?
(2b) Does the simulation model (SIM_MODEL) correspond to my beliefs (not
observations) about the real world (REALITY)?
(3) Do my simulated data (SIM_DATA) match the data I can observe in
reality (OBS_DATA)?
I think it is generally agreed that the _process_ of getting the answer to
question (3) to be 'yes' is called "calibration". I would agree with Paul
Fishwick (citing Zeigler) that if the answer to question (1) is 'yes', then
the model is "verified".
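To make the calibration sense of question (3) concrete, here is a minimal sketch: tune a free parameter of SIM_MODEL until SIM_DATA matches OBS_DATA as closely as possible. Everything in it (the toy growth model, the parameter name `growth`, the "observed" series) is a hypothetical illustration, not anyone's actual model or data.

```python
# Toy illustration of calibration: adjust a SIM_MODEL parameter
# until the discrepancy between SIM_DATA and OBS_DATA is minimized.

def simulate(growth, steps=5, start=1.0):
    """Toy SIM_MODEL: geometric growth from `start` for `steps` steps."""
    data, x = [], start
    for _ in range(steps):
        x *= growth
        data.append(x)
    return data

def discrepancy(sim_data, obs_data):
    """Sum of squared differences between SIM_DATA and OBS_DATA."""
    return sum((s - o) ** 2 for s, o in zip(sim_data, obs_data))

def calibrate(obs_data, candidates):
    """Pick the candidate parameter whose SIM_DATA best matches OBS_DATA."""
    return min(candidates, key=lambda g: discrepancy(simulate(g), obs_data))

# Pretend these numbers came from REALITY (fabricated for illustration;
# they happen to follow geometric growth at rate 1.1).
obs = [1.1, 1.21, 1.331, 1.4641, 1.61051]

best = calibrate(obs, [g / 100 for g in range(90, 131)])
print(best)
```

Note that a "calibrated" model in this sense says nothing about whether the model (or its theory) has anything to do with REALITY beyond reproducing one observed series, which is exactly the distinction drawn below.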
Yet there is clearly a difference between getting the SIM_DATA and
OBS_DATA to match up and having the SIM_MODEL (or THEORY) bear any relation
to REALITY. Thus, after some reflection on this issue, I would say that
answering (2a) 'yes' means the theory is "valid" and answering (2b) 'yes'
means the model is "valid".
Of course this means that you can have almost any combination of "valid"
or "invalid" models or theories, "verified" or "not verified" models, and
"calibrated" or "uncalibrated" data.
Scott (Moss): from what I read, you would disagree with this terminology.
Am I right in this assumption? How would you make these distinctions?
Rob Bernard
-- Robert N. Bernard
-- [log in to unmask]
-- http://www.walrus.com/~minusone