Computer models are
utilized in such diverse fields as system reliability, risk
analysis, safety and performance assessment, engineering
applications, factory production, mechanical design, weather
forecasting, economic forecasting, and even the transmission of
HIV. The proper exercising of such models presents many
challenges for the analyst, such as the following:
- Models have many (perhaps hundreds of) uncertain input parameters
- Dependencies frequently exist among some of the input parameters
- Models produce many different outputs
- Model output is frequently time dependent
- Models are characterized by their mathematical complexity (frequently systems of nonlinear differential equations)
- Model calculations can be very time consuming
Analysts
have long recognized that the uncertainty in the model output(s) must be
characterized and that the dominant contributors to that uncertainty need
to be identified. In addition, the principal contributors to the
magnitude of the output(s) must be identified. Uncertainty and
sensitivity analyses are used to assist analysts in achieving these
goals. A correct characterization of the uncertainty in the output(s)
depends on a correct characterization of the uncertainty in the
input(s). There are many ways in which input uncertainty can be
characterized, including the use of:
- Application-specific data
- Generic test data
- Data obtained from experiments
- Engineering judgment combined with testing results
- Expert opinion
Analysts
desiring to perform uncertainty and sensitivity analyses with their
computer models need techniques that:
- Are robust
- Are easy to implement
- Are cost effective
- Pass a common-sense test
- Provide reproducible results
Monte
Carlo techniques are commonly used to address these needs. Monte
Carlo methods are employed to generate sample observations from
probability distributions that are used to characterize the
uncertainty in computer model inputs. *Simple random sampling*
and *Latin hypercube sampling* are two popular Monte Carlo
methods. Most analysts are familiar with simple Monte Carlo, which
is based on simple random sampling. Latin hypercube sampling (LHS),
developed in 1975, is based on stratified sampling and provides an
*efficient* method for generating observations for each of the
inputs to the computer model.
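A minimal sketch of the two sampling approaches is given below in Python; it is illustrative only and not part of the original text. The function names, the use of SciPy distribution objects to represent the uncertain inputs, and the two example distributions are assumptions made for the sketch.

```python
import numpy as np
from scipy import stats


def simple_random_sample(n, dists, rng):
    """Simple Monte Carlo: draw n independent observations for each
    input from its marginal distribution (no stratification)."""
    return np.column_stack([d.rvs(size=n, random_state=rng) for d in dists])


def latin_hypercube_sample(n, dists, rng):
    """Latin hypercube sampling: split each input's probability range
    into n equal-probability strata, draw one value from every stratum,
    and pair the strata randomly across inputs."""
    k = len(dists)
    sample = np.empty((n, k))
    for j, dist in enumerate(dists):
        # One uniform draw inside each of the n strata [i/n, (i+1)/n).
        u = (rng.random(n) + np.arange(n)) / n
        # Map the stratified probabilities through the inverse CDF.
        values = dist.ppf(u)
        # Shuffle so strata are paired at random across the k inputs.
        sample[:, j] = rng.permutation(values)
    return sample


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two illustrative uncertain inputs: a normal and a uniform variable.
    inputs = [stats.norm(loc=10.0, scale=2.0), stats.uniform(loc=0.0, scale=5.0)]
    X_srs = simple_random_sample(100, inputs, rng)
    X_lhs = latin_hypercube_sample(100, inputs, rng)
    print(X_srs.shape, X_lhs.shape)  # (100, 2) (100, 2)
```

Because every stratum of every input is sampled exactly once, LHS covers each marginal distribution more evenly than simple random sampling for the same number of model runs, which is the source of its efficiency.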
The use of the word *efficient*
may appear to be a misnomer for those familiar with Monte Carlo
techniques, since Monte Carlo is frequently associated with
thousands of computer runs where computer cost and time are of
little concern. However, efficiency is a concern in computer
modeling applications that tax the limits of the computer and
require expensive and time-consuming calculations. Moreover,
analysts have historically kept ahead of the advancement of computer
technology by developing ever more complicated and computationally
difficult computer models. As an example, the computer modeling of
risk at a nuclear power plant is a very complex process that
frequently utilizes input based on 160 to 180 different uncertain
variables. Safety assessment calculations for a plant typically
involve 200 to 250 computer model runs that can take up to a week to
perform. This situation is not unique to safety assessment
calculations; many other areas of application, such as complex
system reliability, weather forecasting, and economic prediction,
share these same concerns for efficiency.