Adaptive Sampling Fixed-step and Line Search Methods for Stochastic Optimization

Hui Tan, Purdue University

Abstract

For the unconstrained optimization problem min_x f(x) (P), where neither the function f nor its gradient ∇f is directly accessible except through Monte Carlo estimates, we present three solution algorithms: a fixed-step method for infinite-population sampling, a fixed-step method for finite-population sampling, and a line search method for infinite-population sampling. The salient feature of each algorithm is that the Monte Carlo sampling adapts to the algorithm trajectory, sampling little when the iterates are assessed to be far from a first-order critical point and sampling more when they are assessed to be close to one. We show that a specific form of such adaptive sampling, one that balances the squared bias and the variance of the gradient estimates, achieves global convergence to a first-order critical point while enjoying the fastest convergence rate achievable under Monte Carlo sampling. Numerical experience on popular test problems is promising.
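As a rough illustration of the adaptive-sampling idea described above (not the specific schedule analyzed in the thesis), the sketch below runs a fixed-step stochastic gradient method whose Monte Carlo sample size grows as the estimated gradient norm shrinks, i.e., as the iterate approaches a first-order critical point. The function names (`sample_gradient`, `adaptive_fixed_step`), the norm-based sample-size rule, and all parameter values are assumptions made for illustration.

```python
import numpy as np

def sample_gradient(grad_oracle, x, m, rng):
    """Average m Monte Carlo gradient estimates at x (oracle assumed unbiased)."""
    return np.mean([grad_oracle(x, rng) for _ in range(m)], axis=0)

def adaptive_fixed_step(grad_oracle, x0, step=0.1, m0=8, m_max=4096,
                        c=1.0, iters=200, seed=0):
    """Fixed-step descent with a trajectory-adaptive Monte Carlo sample size.

    Illustrative rule only: the sample size is chosen so that sampling effort
    stays small while the estimated gradient is large (far from criticality)
    and increases as the estimated gradient norm shrinks (near criticality).
    """
    rng = np.random.default_rng(seed)
    x, m = np.asarray(x0, dtype=float), m0
    for _ in range(iters):
        g = sample_gradient(grad_oracle, x, m, rng)   # averaged gradient estimate
        x = x - step * g                              # fixed-step update
        # Grow the sample size as ||g|| shrinks, capped at m_max.
        m = int(min(m_max, max(m0, c / max(np.linalg.norm(g), 1e-12) ** 2)))
    return x

# Example usage: noisy gradient oracle for f(x) = 0.5 * ||x||^2.
noisy_grad = lambda x, rng: x + rng.normal(scale=0.5, size=x.shape)
x_final = adaptive_fixed_step(noisy_grad, x0=np.ones(5))
```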

Degree

M.S.I.E.

Advisors

Wan, Purdue University.

Subject Area

Industrial engineering
