Models

A model can be defined as:

A system used as a surrogate for another system.

In typical computer simulation models, a system with mathematical entities is used as a surrogate for a system with physical entities. When we use the word system, without qualification, we are referring to a real or hypothetical system that is the subject of the simulation study.

Mathematical models are conceptual abstractions of a particular aspect of a system. The mathematics we will be using include probability, statistics, and graphs. When we use the word model, without qualification, we will be referring to a graphical description of a system called an event graph. Finally, simulations will refer to computer programs developed from event graph models. Simulations will be our methodology for studying the model. A model serves as the interface between a system and a methodology for studying the system.
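To make these terms concrete, here is a minimal sketch of a simulation program developed from an event graph: the classic single-server queue with ENTER, START, and LEAVE events. In an event graph, each vertex is an event that changes the state, and each edge schedules another event after a delay when its condition holds. The sketch is written in Python rather than SIGMA's own notation, and the event names, rates, and horizon are assumptions chosen for illustration.

    import heapq
    import random

    # Minimal sketch (hypothetical, not SIGMA's notation) of a single-server
    # queue event graph. The state is two integer counts; each event changes
    # the state and, when an edge condition holds, schedules other events on
    # the future-events heap.

    def simulate(horizon=100.0, seed=1):
        rng = random.Random(seed)
        queue, server = 0, 1          # waiting customers; idle servers
        served = 0
        future = [(0.0, "ENTER")]     # future events list: (time, event)
        while future:
            clock, event = heapq.heappop(future)
            if clock > horizon:
                break
            if event == "ENTER":      # arrival joins the queue
                queue += 1
                heapq.heappush(future, (clock + rng.expovariate(1.0), "ENTER"))
                if server > 0:        # edge condition: a server is idle
                    heapq.heappush(future, (clock, "START"))
            elif event == "START":    # service begins, seizing the server
                queue -= 1
                server -= 1
                heapq.heappush(future, (clock + rng.expovariate(1.25), "LEAVE"))
            elif event == "LEAVE":    # service ends, releasing the server
                server += 1
                served += 1
                if queue > 0:         # edge condition: someone is waiting
                    heapq.heappush(future, (clock, "START"))
        return served

    print(simulate())                 # customers served by the horizon

Note how directly the program mirrors the graph: there is one branch per event vertex and one scheduling statement per edge, which is what makes verifying such a program against its model straightforward.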

When evaluating a simulation, it is important to differentiate modeling a system from coding a model. Whether a simulation is "good" is a largely objective question. A good simulation is a completely faithful rendition of a good model; nothing in the model is lost in the code. The process of testing whether the simulation is good is called simulation verification. This is often much more complicated than verification of other types of computer programs, but there is no conceptual difficulty in defining a good program as one that is error free. Of course, the most one can honestly certify about any computer program is that it currently has no known errors.

Defining what constitutes a "good" model is much more subjective. A good model is based on good assumptions. Good assumptions make the simulation more efficient or the system easier to understand while costing little in terms of validity. Driving a good bargain between model simplicity and model validity is the essence of the art of modeling.

A purist view of validity is that a model is valid as long as it is based on explicit assumptions and the implications of those assumptions are well understood. From this academic viewpoint, a modeling error is a failure to state and apply all model assumptions correctly. If we take the pragmatist's view of a good model as one that contributes to correct decisions, it is possible to test the effect of including a particular detail or making a certain assumption by comparing the behavior of the simulation with and without it. Simulation is one of the few methodologies that allows testing the robustness of models to different assumptions. Perhaps the biggest danger in simulation modeling is including too much detail in the model. An experienced consultant in the field once remarked that he could tell a novice at simulation by the excessive amount of detail in his or her models.
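As a concrete illustration of such a with-and-without comparison, the sketch below tests one common assumption: that service times can be treated as constant rather than variable. It is a hypothetical example (the function mean_wait and its rates are not from the original text); it estimates the mean waiting time in a single-server queue both ways using the Lindley recursion W[n+1] = max(0, W[n] + S[n] - A[n+1]).

    import random

    # Hypothetical sketch of testing an assumption by comparison. The "detail"
    # is service-time variability: exponential service times versus a constant
    # service time with the same mean (0.8). Waiting times follow the Lindley
    # recursion W[n+1] = max(0, W[n] + S[n] - A[n+1]).

    def mean_wait(variable_service, customers=100_000, seed=7):
        rng = random.Random(seed)
        wait, total = 0.0, 0.0
        for _ in range(customers):
            service = rng.expovariate(1.25) if variable_service else 0.8
            gap = rng.expovariate(1.0)    # interarrival time to next customer
            wait = max(0.0, wait + service - gap)
            total += wait
        return total / customers

    # The two estimates differ by roughly a factor of two, so for this system
    # assuming constant service would cost a great deal of validity.
    print(mean_wait(True), mean_wait(False))

Had the two estimates been close, the simpler constant-service assumption could have been adopted with little loss of validity.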

A technique for keeping model details at a reasonable level is to focus on the similarities among the entities in the system rather than the differences. If transient entities (customers, jobs, messages, etc.) can be treated as identical, you can develop a valid model that merely keeps track of the numbers of transient entities at various stages of their progress through the system. This makes a detailed record for each individual entity unnecessary: the model updates relatively few integers (the counts) instead of creating and maintaining a separate record for every transient entity. Similarly, treating resident entities (servers, machines, buffers, etc.) as identical allows you to maintain counts of the numbers of resident entities in the various states of their process cycles rather than keeping a record of the status of each one. When a great many transient entities are in the system at one time, treating them as identical may be a necessity; it is certainly more efficient to have a single integer variable that counts transient entities than to create and maintain thousands of complete records.
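The sketch below contrasts the two levels of detail for the same two state changes, an arrival and a service start. The record fields shown are hypothetical, chosen only for illustration.

    from collections import deque

    # Identical transient entities: the whole queue is a single integer, so
    # memory and bookkeeping stay constant no matter how busy the system is.
    waiting = 0
    waiting += 1                      # a customer arrives
    waiting -= 1                      # a service starts

    # Distinguishable entities: a record is created and maintained for every
    # customer in the system (the fields shown are hypothetical).
    queue = deque()
    queue.append({"id": 17, "arrived": 12.4, "priority": 2})   # a customer arrives
    record = queue.popleft()                                   # a service starts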

It is natural to notice differences between entities in a system; a valuable modeling skill is the ability to recognize similarities. Differences should be modeled only when they are essential to the validity of the study results. It is equally natural to include detail unless there are solid reasons for assuming it can be omitted, so when building simulation models of complicated systems, it is good practice to reverse this default and require justification for including detail. Even when differences between entities are thought to be important, a skilled modeler will often be able to define groups of entities that can be treated as identical.

Sometimes the activity of developing a simulation model has as much value as the model itself. Building a model forces us to identify our objectives, determine constraints, quantify our knowledge, and expose our misconceptions. It could be argued that a study has merit even if its recommendations are never implemented and that a simulation model has value even if it never runs. Of course, this is of little comfort to the student who fails a homework assignment or an engineer who must try to find another job. It is vastly more satisfying to professors and employers if the simulation model runs and the recommendations from the study are adopted. The motivation for the development of event graphs was to make simulation models easier to build and verify. With event graphs, it is much easier to verify that your simulation program reflects the way you have modeled the system than it is to validate that the model actually can be used to imitate the relevant behavior of the real system.


