Contributions to maximum penalized likelihood estimation

Chunfu Qiu, Purdue University

Abstract

The idea of maximum penalized likelihood estimation has appeared in the literature on smoothing spline regression, penalized likelihood regression for non-Gaussian data, density estimation, hazard rate estimation, and Poisson intensity estimation. In this thesis, a generic formulation of the method is established for any model with a function as the parameter of interest. The resulting estimator balances goodness-of-fit against smoothness by minimizing the sum of the negative log likelihood and a roughness penalty. It is shown that the estimator exists and is unique under mild conditions. The method is also justified from a Bayesian perspective: the estimator is shown to be the limit of a sequence of posterior modes corresponding to partially noninformative Gaussian process priors. We then focus on maximum penalized likelihood density estimation. The logistic density transformation is introduced to overcome the positivity and unity constraints. Under mild conditions, the rate of convergence of the estimator in terms of the symmetrized Kullback-Leibler divergence is established. A semiparametric approximation to the estimator is presented, which lies in a data-adaptive finite-dimensional function space and hence can be computed. Some numerical examples are given. Finally, we conduct an asymptotic analysis of penalized likelihood regression for data from an exponential family distribution. Asymptotic convergence rates in terms of the integrated symmetrized Kullback-Leibler divergence and a related mean squared error are obtained.
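The two ideas at the heart of the abstract — minimizing a negative log likelihood plus a roughness penalty, and using the logistic density transformation to enforce the positivity and unity constraints — can be illustrated with a minimal numerical sketch. The discretization below is an assumption for illustration only (a uniform grid, second differences as the roughness penalty, a hand-picked smoothing parameter `lam`), not the thesis's actual semiparametric computation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.beta(2, 5, size=200)  # a sample supported on [0, 1]

# Discretize eta on a grid. The logistic density transform
#   f(x) = exp(eta(x)) / \int exp(eta) dx
# makes f positive and integrate to one for ANY unconstrained eta.
grid = np.linspace(0.0, 1.0, 51)
h = grid[1] - grid[0]
idx = np.clip(np.searchsorted(grid, data) - 1, 0, len(grid) - 1)

lam = 1e-4  # smoothing parameter: trades goodness-of-fit for smoothness

def objective(eta):
    # Negative log likelihood of the transformed density at the data points.
    log_norm = np.log(np.exp(eta).sum() * h)        # Riemann sum for \int exp(eta)
    neg_loglik = -(eta[idx] - log_norm).sum()
    # Roughness penalty: second differences approximate eta'', so this is
    # roughly lam * \int (eta'')^2.
    d2 = np.diff(eta, 2) / h**2
    penalty = lam * (d2**2).sum() * h
    return neg_loglik + penalty

res = minimize(objective, np.zeros(len(grid)), method="L-BFGS-B")
eta_hat = res.x
f_hat = np.exp(eta_hat) / (np.exp(eta_hat).sum() * h)  # estimated density
```

Shrinking `lam` lets the estimate track the data more closely; growing it pushes `eta_hat` toward a function with vanishing second derivative, i.e. a very smooth density.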

Degree

Ph.D.

Advisors

Gu, Purdue University.

Subject Area

Statistics

