Model Verification


An absolutely valid simulation model with all the detail and behavior of real life is probably not attainable, or even desirable. However, every simulation model should do what its creator intended. Ensuring that the computer code for the simulation model does what you think it is doing is referred to as the process of model verification.

There is a trade-off between validation and verification of a simulation model. Adding detail to a model makes the code more complicated. If correctly implemented, this detail may improve model validity; however, complex details can make code verification more difficult, if not impossible. Identifying the substantive details that genuinely need to be included in a simulation model is itself important and often subject to negotiation.

Gross errors in a simulation code can be detected using standard statistical testing. For example, a classical paired t-test between the means of samples from the real world and those from simulation runs might be conducted. (A description of a t-test can be found in any introductory statistics textbook.) However, there may not be enough real-world data to reject the hypothesis that the model and the real-world data have nearly the same parameters. Moreover, the data itself may not be valid. (See Sources of Data.)
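For illustration, a comparison of this kind might look like the following sketch (Python with SciPy), assuming the real-world and simulated observations can be matched pair-wise, say daily throughput under comparable operating conditions. The data values and variable names here are hypothetical placeholders, not output from any particular model.

 from scipy import stats
 
 # Hypothetical matched observations: each pair comes from the same operating day.
 real_world = [102.0, 98.5, 110.2, 95.7, 104.1, 99.8]   # observed system output
 simulated  = [100.3, 97.9, 112.5, 94.2, 105.6, 101.1]  # matched simulation output
 
 # Paired t-test of the hypothesis that the mean difference is zero.
 t_stat, p_value = stats.ttest_rel(real_world, simulated)
 print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

Failing to reject the hypothesis here is weak evidence at best: with only a handful of real-world observations, the test has little power to detect a meaningful difference.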

Translating a model from computer code into clear language is a good exercise. It is interesting to contrast what two different people think a model is doing. You can also compare what you think your model is doing with what it actually does by reading the English translations generated by SIGMA.

An excellent tool for helping to verify a simulation model is an informal exercise called a "Turing test," named after Alan Turing, who conjectured that it might become impossible to distinguish computing machines from real people. For a Turing test, actual blank forms used in the day-to-day management of a system are filled in with either simulated or real data. Only the blanks that are relevant to the purposes of the study differ. Managers and other people familiar with the system are then asked to identify the real and simulated documents and to tell how each form was identified. It is vital that everyone know in advance that this exercise is likely to be repeated several times.

People familiar with these forms are usually more comfortable doing this exercise than they are reviewing computer code, evaluating a statistical analysis, or even watching an animation. Just the process of determining which data on each form are relevant to the study can make the exercise worthwhile.

A typical experience with a Turing test is that the manager can immediately identify most, if not all, of the bogus forms. This is actually a good outcome: it indicates that the manager is paying attention, and it can lay the foundation for effective communication. A non-technical manager "winning" the first round may also help defuse any antagonism they may have developed toward the simulation project. What happens next is critical: when the manager explains how the simulated data was identified, changes to the simulation model should be made and the exercise repeated. It is the repetition of the exercise that is important, not the outcome of each iteration.

Statistical analysis to assess whether the outcome of the exercise is likely to result from guessing is presented in Schruben (1980). However, such formal analysis can actually be detrimental: it could easily inhibit communication and alienate the manager by moving the discussion to the unfamiliar ground of mathematical statistics.
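One simple version of such an analysis, sketched below, treats each identification as a coin flip and asks whether the number of correct calls is larger than guessing would explain. This is an illustration only, not necessarily the procedure presented in Schruben (1980), and the counts are hypothetical.

 from scipy import stats
 
 n_forms = 10    # forms shown to the reviewer (hypothetical)
 n_correct = 8   # forms correctly labeled real or simulated (hypothetical)
 
 # Under pure guessing each call is correct with probability 0.5;
 # test whether the observed success count is significantly higher.
 result = stats.binomtest(n_correct, n_forms, p=0.5, alternative="greater")
 print(f"p-value = {result.pvalue:.3f}")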


Back to About
