Contributions to Monte Carlo analysis: Variance reduction, random search, and Bayesian robustness
Abstract
This research studies the efficiency and application of Monte Carlo simulation. Contributions are made to variance-reduction methodology, to the design and analysis of random-search algorithms, and to the study of Bayesian robustness.

We develop new variance-reduction techniques to improve the efficiency of Monte Carlo simulation for analyzing stochastic systems. The first technique is approximation-assisted point estimation, which combines a deterministic approximation with a Monte Carlo simulation estimator. Three estimators are investigated: (1) binary choice, (2) linear combination, and (3) Bayesian analysis. We find that the linear combination is the most effective. The second technique is biased control-variate estimation, obtained by replacing the control-variate mean with a deterministic approximation. Three estimators are investigated: (1) the natural estimator, (2) the linear-combination estimator, and (3) the classical estimator. Necessary and sufficient conditions for a variance reduction are derived; substantial variance reduction is possible only with the classical estimator.

We study two random-search algorithms for the global optimization of mathematical programming problems. For the Pure Adaptive Search (PAS) algorithm, we (1) show that PAS converges to the optimal solution with probability one, (2) show that each PAS iteration reduces the expected volume of the remaining feasible region by 50%, and (3) improve the Patel, Smith, and Zabinsky complexity measure. For the Improving Hit-and-Run (IHR) algorithm, we (1) show that IHR converges to the optimal solution with probability one, (2) empirically obtain the limiting distribution of the standardized distance between the center of the feasible region and the current position after n iterations, and (3) improve the Zabinsky, Smith, McDonald, Romeijn, and Kaufman complexity measure.

We also investigate Monte Carlo integration methods for estimating Bayesian robustness. We study how the Bayesian posterior integral depends on the hyperparameters of the prior distribution, and we obtain information on this dependence via gradients estimated with infinitesimal perturbation analysis (IPA). Sufficient conditions for interchanging expectation and differentiation are derived to justify the IPA method. Asymptotically valid standard-error and confidence-interval estimators are suggested.
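Illustrative sketches

The abstract describes the estimators only at a high level. The following is a minimal sketch of the linear-combination idea behind the first variance-reduction technique, assuming a convex combination of the Monte Carlo sample mean and a deterministic approximation; the weight w = 0.7, the exponential toy example, and the name combined_estimator are illustrative choices, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_estimator(samples, approx, w):
    """Convex combination of the Monte Carlo sample mean and a
    deterministic approximation of the same quantity."""
    return w * samples.mean() + (1.0 - w) * approx

# Toy example: estimate E[Y] for Y ~ Exp(1) (true mean 1.0),
# given a crude deterministic approximation of 0.9.
samples = rng.exponential(scale=1.0, size=100)
print(combined_estimator(samples, approx=0.9, w=0.7))
```

One natural weight, minimizing mean squared error under this convex-combination form, is w = b^2 / (b^2 + sigma^2/n), where b is the approximation's bias and sigma^2/n the sampling variance of the mean; as n grows, w tends to 1 and the estimator relies increasingly on the simulation. Whether this matches the dissertation's linear-combination estimator is an assumption here.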
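For the second technique, the classical control-variate estimator Ybar - beta*(Cbar - mu_C) becomes biased once the unknown control mean mu_C is replaced by a deterministic approximation. A minimal sketch; the toy model Y = C + noise and the approximation value 1.95 are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_cv_classical(y, c, mu_tilde):
    """Classical control-variate estimator, with the unknown control
    mean E[C] replaced by a deterministic approximation mu_tilde."""
    beta = np.cov(y, c, ddof=1)[0, 1] / np.var(c, ddof=1)
    return y.mean() - beta * (c.mean() - mu_tilde)

# Toy example: Y = C + noise with C ~ N(2, 1), so E[Y] = 2.
c = rng.normal(2.0, 1.0, size=1000)
y = c + rng.normal(0.0, 0.5, size=1000)
print(biased_cv_classical(y, c, mu_tilde=1.95))  # 1.95 approximates E[C] = 2
```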
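Pure Adaptive Search draws each iterate uniformly from the current improving region. The rejection-sampling realization below is an assumption for illustration (exact uniform sampling from the improving region is generally impractical); it does, however, make the 50% expected volume reduction visible as a per-iteration doubling of the rejection cost.

```python
import numpy as np

rng = np.random.default_rng(2)

def pure_adaptive_search(f, lower, upper, n_iter=12):
    """Pure Adaptive Search on a box: each iterate is drawn uniformly
    from the improving region {x : f(x) < best}, realized here by
    rejection sampling.  Because the expected improving-region volume
    halves at each iteration, the rejection cost roughly doubles per
    step, which is why PAS serves as a benchmark rather than a
    practical algorithm."""
    x = rng.uniform(lower, upper)
    best = f(x)
    for _ in range(n_iter):
        while True:  # resample until an improving point is found
            cand = rng.uniform(lower, upper)
            if f(cand) < best:
                x, best = cand, f(cand)
                break
    return x, best

# Toy example: minimize ||x||^2 over the box [-1, 1]^2.
f = lambda x: float(np.sum(x ** 2))
print(pure_adaptive_search(f, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```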
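Improving Hit-and-Run replaces uniform improving-region sampling with a cheaper step: pick a uniform random direction, draw a candidate uniformly on the feasible chord through the current point, and move only when the objective improves. The box feasible region and quadratic objective below are illustrative choices, not the dissertation's test problems.

```python
import numpy as np

rng = np.random.default_rng(3)

def improving_hit_and_run(f, lower, upper, x0, n_iter=1000):
    """Improving Hit-and-Run on a box: uniform random direction,
    candidate uniform on the feasible chord, accept only if improving."""
    x, best = x0.copy(), f(x0)
    for _ in range(n_iter):
        u = rng.normal(size=len(x))
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        # Feasible step sizes t with lower <= x + t*u <= upper:
        t1 = (lower - x) / u
        t2 = (upper - x) / u
        t_lo = np.max(np.minimum(t1, t2))
        t_hi = np.min(np.maximum(t1, t2))
        cand = x + rng.uniform(t_lo, t_hi) * u
        if f(cand) < best:
            x, best = cand, f(cand)
    return x, best

f = lambda x: float(np.sum(x ** 2))
x0 = np.array([0.8, -0.5])
print(improving_hit_and_run(f, np.array([-1.0, -1.0]),
                            np.array([1.0, 1.0]), x0))
```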
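For the Bayesian-robustness work, IPA estimates the gradient of an expectation with respect to a hyperparameter by differentiating the sample path, with the interchange of expectation and differentiation being exactly what the derived sufficient conditions must justify. A minimal sketch for an exponential example (the choice of distribution and of h(x) = x^2 are assumptions for illustration), including the asymptotic standard error mentioned in the abstract:

```python
import numpy as np

rng = np.random.default_rng(4)

def ipa_gradient(h_prime, theta, n=100_000):
    """Pathwise (IPA) estimate of d/dtheta E[h(X)] for X ~ Exp(rate=theta).

    Writing X = E / theta with E ~ Exp(1) gives dX/dtheta = -X / theta,
    so, assuming the interchange of expectation and differentiation is
    valid, d/dtheta E[h(X)] = E[h'(X) * (-X / theta)]."""
    e = rng.exponential(scale=1.0, size=n)
    x = e / theta
    path_derivs = h_prime(x) * (-x / theta)
    est = path_derivs.mean()
    se = path_derivs.std(ddof=1) / np.sqrt(n)  # asymptotic standard error
    return est, se

# Check: for h(x) = x^2, E[X^2] = 2/theta^2, so the exact gradient at
# theta = 2 is -4/theta^3 = -0.5.
est, se = ipa_gradient(h_prime=lambda x: 2.0 * x, theta=2.0)
print(f"IPA estimate {est:.4f} +/- {1.96 * se:.4f} (exact -0.5)")
```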
Degree
Ph.D.
Advisors
Schmeiser, Purdue University.
Subject Area
Industrial engineering; Statistics; Mathematics