A computational method for stochastic impulse control problems
Stochastic control refers to the optimal control of systems subject to randomness. Impulse and singular controls can instantaneously change the system state, in contrast to the more common controls that change only the rate of change of the state. When the cost of control has a fixed component, it is usually optimal to effect instantaneous changes, which cause discontinuities in the state evolution. Examples of impulse controls abound in diverse areas, including finance, operations management, and economics.

Stochastic impulse control problems are usually solved by converting them to differential equation problems via dynamic programming arguments. In all but the simplest cases, the resulting differential equations cannot be solved analytically. They are comparatively hard to solve because they have free boundaries: unknown boundaries that must be determined as part of the solution.

In this dissertation, we construct a transformation scheme that converts the arising free boundary problem into a sequence of fixed-boundary problems, each of which is easy to solve. We show that the resulting sequence of solutions improves monotonically and converges to the optimal solution. We also provide an ε-optimality result that bounds the difference between any terminated iterate and the optimal value function. Applications in finance and operations management are illustrated.
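To make the setting concrete, the following is a minimal, generic sketch of solving a discretized impulse control problem of the kind described above: a Brownian state with a quadratic running cost, where an impulse resets the state to zero at a fixed cost K. It iterates a monotone fixed-point update for the quasi-variational inequality min(rV − ½σ²V″ − x², V − M) = 0, where M is the value of intervening. This is a standard value-iteration sweep on a grid, not the dissertation's transformation scheme; all parameter names (sigma, r, K) and the model itself are illustrative assumptions.

```python
import numpy as np

# Illustrative model (not from the dissertation): dX = sigma dW,
# running cost x^2, discount rate r, impulse resets X to 0 at fixed cost K.
sigma, r, K = 1.0, 0.5, 2.0
N, L = 401, 5.0                  # grid size and half-width
x = np.linspace(-L, L, N)
h = x[1] - x[0]
a = 0.5 * sigma**2 / h**2       # diffusion coefficient on the grid

V = np.zeros(N)                 # start from a subsolution; iterates increase
for it in range(200):
    # Intervention value: jump to x = 0 and pay the fixed cost K.
    M = K + np.interp(0.0, x, V)
    # One monotone sweep for the discretized QVI
    # min(rV - 0.5 sigma^2 V'' - x^2, V - M) = 0.
    V_new = V.copy()
    interior = (x[1:-1]**2 + a * (V[2:] + V[:-2])) / (r + 2 * a)
    V_new[1:-1] = np.minimum(interior, M)
    V_new[0] = min(V_new[1], M)       # crude treatment of the grid edges
    V_new[-1] = min(V_new[-2], M)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# The continuation region is where V < M; its endpoints approximate the
# free boundaries that the dissertation's scheme determines iteratively.
M = K + np.interp(0.0, x, V)
continuation = V < M - 1e-6
```

The free-boundary character shows up in the final line: the edges of the continuation region are not known in advance but emerge from the converged solution, which is precisely what makes these problems harder than fixed-boundary ones.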
Hong Wan, Purdue University, Muthukumar Muthuraman, Purdue University.
Engineering, Industrial|Operations Research