Southwest Technology Consultants

  Excellence In:
Statistical Consulting
Statistical Training
Data Analysis
Uncertainty and Sensitivity Analysis for Computer Modeling Applications
Introduction

Computer models are used in fields as diverse as system reliability, risk analysis, safety and performance assessment, engineering, factory production, mechanical design, weather forecasting, economic forecasting, and even the transmission of HIV. Properly exercising such models presents many challenges for the analyst, for example:

  • Models have many (perhaps hundreds) uncertain input parameters
  • Dependencies frequently exist among some of the input parameters
  • Models produce many different outputs
  • Model output is frequently time dependent
  • Models are characterized by their mathematical complexity (frequently systems of nonlinear differential equations)
  • Model calculations can be very time consuming

Analysts have long recognized that uncertainty in the model output(s) must be characterized and that the dominant contributors to that uncertainty need to be identified. In addition, the principal contributors to the magnitude of the output(s) must be identified. Uncertainty and sensitivity analyses are used to assist analysts in achieving these goals. A correct characterization of the uncertainty in the output(s) depends on a correct characterization of the uncertainty in the input(s). Input uncertainty can be characterized in many ways, including the use of:

  • Application specific data
  • Generic test data
  • Data obtained from experiments
  • Engineering judgment combined with testing results
  • Expert opinion

Analysts desiring to perform uncertainty and sensitivity analyses with their computer models need techniques that:

  • Are robust
  • Are easy to implement
  • Are cost effective
  • Pass a common-sense test
  • Provide reproducible results

Monte Carlo techniques are commonly used to address these needs. Monte Carlo methods generate sample observations from the probability distributions used to characterize the uncertainty in computer model inputs. Simple random sampling and Latin hypercube sampling are two popular Monte Carlo methods. Most analysts are familiar with simple Monte Carlo, which is based on simple random sampling. Latin hypercube sampling (LHS), developed in 1975, is based on stratified sampling and provides an efficient way to generate observations for each input to the computer model.
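To make the distinction concrete, here is a minimal sketch (assuming Python with numpy; the function names and the inverse-CDF mapping noted in the final comment are illustrative, not material taken from the course itself) contrasting simple random sampling with a basic Latin hypercube design on the unit cube:

    import numpy as np

    def simple_monte_carlo(n, d, rng):
        """Simple random sampling: n independent points in the d-dimensional unit cube."""
        return rng.random((n, d))

    def latin_hypercube(n, d, rng):
        """Basic Latin hypercube sample: each of the n equal-probability strata
        of every input dimension is sampled exactly once."""
        # one uniform draw inside each of the n strata, for every dimension
        u = (np.arange(n)[:, None] + rng.random((n, d))) / n
        # independently scramble the strata in each dimension
        for j in range(d):
            u[:, j] = rng.permutation(u[:, j])
        return u

    rng = np.random.default_rng(1)
    srs = simple_monte_carlo(100, 3, rng)   # 100 model runs, 3 uncertain inputs
    lhs = latin_hypercube(100, 3, rng)
    # Map the uniform samples to the actual input distributions by inverse CDF,
    # e.g. for a normally distributed input: scipy.stats.norm(mu, sigma).ppf(lhs[:, 0])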

The use of the word "efficient" may appear to be a misnomer to those familiar with Monte Carlo techniques, since Monte Carlo is frequently associated with thousands of computer runs where computer cost and time are of little concern. However, efficiency is a concern in computer modeling applications that tax the limits of the computer and require expensive and time-consuming calculations. Moreover, analysts have historically kept ahead of advances in computer technology by developing ever more complicated and computationally demanding models. As an example, the computer modeling of risk at a nuclear power plant is a very complex process that frequently utilizes input based on 160 to 180 different uncertain variables. Safety assessment calculations for a plant typically involve 200 to 250 computer model runs that can take up to a week to perform. This situation is not unique to safety assessment calculations, as many other areas of application such as complex system reliability, weather forecasting, and economic prediction share these same concerns for efficiency.


What You Will Learn

Attendees will gain an understanding of the usefulness of Monte Carlo techniques, with special emphasis on Latin hypercube sampling (LHS) and on techniques for performing sensitivity and uncertainty analyses. In particular, attendees will learn how to:

  • Generate a simple Monte Carlo sample
  • Generate a simple Latin hypercube sample
  • Compare the efficiencies of simple Monte Carlo and LHS
  • Correlate inputs in simple Monte Carlo and LHS samples when dependencies exist among inputs
  • Quantify the uncertainty in model inputs
  • Analyze input-output relationships
  • Apply sensitivity analysis techniques to determine the dominant inputs (a small illustration follows this list)
  • Apply sensitivity analysis techniques when the model output is temporal
  • Apply uncertainty analysis techniques to determine which inputs contribute to uncertainty in the output
  • Simplify computer models
  • Communicate results to decision makers

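The flavor of one such sensitivity technique can be conveyed with a short sketch: ranking inputs by the Spearman rank correlation between each sampled input and the model output. This is a minimal illustration only; the three-input model, variable names, and the use of numpy/scipy here are hypothetical stand-ins, not material from the course itself.

    import numpy as np
    from scipy.stats import spearmanr

    def rank_inputs(samples, outputs, names):
        """Rank uncertain inputs by the strength of their monotonic association
        with the model output (Spearman rank correlation)."""
        scores = []
        for j, name in enumerate(names):
            rho, _ = spearmanr(samples[:, j], outputs)
            scores.append((name, rho))
        # largest |rho| first: these are the dominant contributors
        return sorted(scores, key=lambda s: abs(s[1]), reverse=True)

    # Hypothetical three-input model y = a + 10*b + 0.1*c
    rng = np.random.default_rng(7)
    x = rng.random((200, 3))                      # e.g. a 200-run sample of the inputs
    y = x[:, 0] + 10.0 * x[:, 1] + 0.1 * x[:, 2]  # stand-in for an expensive model
    print(rank_inputs(x, y, ["a", "b", "c"]))     # input "b" should rank first
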
Course Content

The following topics will be covered:

  • Generating a simple Monte Carlo sample
  • Generating a Latin hypercube sample
  • Correlating inputs when dependencies exist
  • Quantifying the uncertainty in model inputs
  • Displaying input-output relationships
  • Techniques for performing sensitivity analysis
  • Techniques for performing uncertainty analysis
  • Reducing the complexity of computer models
  • Application to an actual computer model

Contact Information

Phone: 505 856-6500

Email: stc@swtechcon.com
