Model selection for high dimensional problems with application to function estimation

Arijit Chakrabarti, Purdue University

Abstract

The problem of selecting a model in an infinite or high dimensional setup has been of great interest in recent years. High dimensional problems typically arise when the number of possible parameters grows with the sample size, while infinite dimensional problems are usually nonparametric in nature. In this thesis we consider two such settings, study the behavior of well known model selection criteria, and propose new criteria with optimal properties.

Using a complete orthonormal basis of L2, the unknown drift function in the white-noise model can be represented as an infinite linear combination of the basis functions, the coefficients being the unknown Fourier coefficients; the problem thus reduces to estimating the vector of Fourier coefficients. It is shown that model selection by the Akaike Information Criterion (AIC), or suitable variants of it (where under each model all but the first finitely many Fourier coefficients are assumed to be zero), followed by least squares estimation, achieves the asymptotic minimax rate of convergence, over an appropriate subset of the parameter space, for squared error loss. A Bayesian model selection rule followed by Bayes estimates is also shown to achieve the same asymptotic rate of convergence. A simulation study then applies these rules, together with some other standard techniques, to the closely related nonparametric regression problem, and the performances of the different estimation procedures are compared.

It is known that BIC may be an inappropriate model selection criterion and a poor approximation to integrated likelihoods in some high dimensional problems. We propose a generalization of BIC, called GBIC, which approximates the logarithm of the integrated likelihood up to O(1), together with a Laplace approximation to the integrated likelihood that is correct up to o(1), in a high dimensional setup where the observations come from an exponential family of distributions. Rates of convergence of the Laplace approximation are derived for specific examples. Extensive simulation results show that GBIC performs much better than BIC and that the Laplace approximation performs remarkably well in many examples, including some outside the exponential family.
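As a purely illustrative sketch (not code from the thesis), the following Python snippet mimics the kind of procedure described above in the closely related nonparametric regression problem: for each candidate truncation level, the retained Fourier coefficients are fitted by least squares, and the level minimizing AIC is selected. The cosine basis, the test function, and the Gaussian profile-likelihood form of AIC are assumptions made for the example.

# Illustrative sketch: truncated-series estimation in nonparametric regression,
# with the truncation level chosen by AIC and the retained coefficients fitted
# by least squares.  Basis, test function and AIC form are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = (np.arange(n) + 0.5) / n                      # equispaced design on (0, 1)
f_true = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)
y = f_true + rng.normal(scale=0.3, size=n)        # noisy observations

def cosine_basis(x, k):
    """First k functions of the orthonormal cosine basis of L2[0, 1]."""
    cols = [np.ones_like(x)]
    cols += [np.sqrt(2) * np.cos(j * np.pi * x) for j in range(1, k)]
    return np.column_stack(cols)

best_k, best_aic, best_fit = None, np.inf, None
for k in range(1, 31):                            # candidate truncation levels
    B = cosine_basis(x, k)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # least-squares Fourier coefficients
    rss = np.sum((y - B @ coef) ** 2)
    aic = n * np.log(rss / n) + 2 * k             # Gaussian profile-likelihood AIC
    if aic < best_aic:
        best_k, best_aic, best_fit = k, aic, B @ coef

print("selected truncation level k =", best_k)
print("mean squared error of the fit =", np.mean((best_fit - f_true) ** 2))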
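For orientation on the second part of the abstract, the standard fixed-dimension forms of the Laplace approximation to the log integrated likelihood and of BIC are recalled below; the notation (maximum likelihood estimate \hat\theta, prior \pi, negative Hessian H_n of the log-likelihood, dimension p) is generic and not taken from the thesis.

\[
\log m(x) \;=\; \log \int f(x \mid \theta)\,\pi(\theta)\,d\theta
\;\approx\; \log f(x \mid \hat\theta) + \log \pi(\hat\theta)
+ \frac{p}{2}\log(2\pi) - \frac{1}{2}\log\bigl|H_n(\hat\theta)\bigr|,
\qquad
\mathrm{BIC} \;=\; \log f(x \mid \hat\theta) - \frac{p}{2}\log n.
\]

For fixed p the terms that BIC omits are O(1), but when p grows with n they need no longer be negligible, which is why BIC can approximate the integrated likelihood poorly in high dimensional problems; this is the gap that GBIC and the refined Laplace approximation of the thesis are designed to close.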

Degree

Ph.D.

Advisors

Ghosh, Purdue University.

Subject Area

Statistics
