Development of expected posterior prior distributions for model comparisons

Jose Miguel Perez, Purdue University

Abstract

Consider the problem of comparing parametric models M1, …, Mk when at least one of the models has an improper prior πN(θi). As is well known, the Bayes factor cannot then be used directly to compare the models, because of the arbitrary multiplicative constants in πN(θi). Many methods have been suggested to overcome this problem. One prominent class of techniques is based on using a (possibly imaginary) training sample y* to update each πN(θi); the resulting posterior πN(θi | y*) is then treated as a prior for that model. In this work we suggest adjusting the initial prior of each model, πN(θi), by averaging its posterior πN(θi | y*) with respect to m*(y*), where m* is a suitable predictive measure on the (imaginary) training sample space and, as in the Intrinsic Bayes Factor, y* is of minimal size. The updated prior, π*, is called the expected posterior prior under m*. Some properties of this approach are: (1) The resulting Bayes factors depend only on the sufficient statistics. (2) The resulting Bayesian inference is coherent and allows multiple comparisons. (3) In many cases it is possible to find m* such that, for a sample of minimal size, there is predictive matching for the comparison of model Mi to Mj, i.e., the Bayes factor Bij = 1. (4) In the case of nested models, where M1 is nested in every other model, choosing m*(y*) to be the marginal of y* under M1 is asymptotically equivalent to the arithmetic Intrinsic Bayes Factor (Berger and Pericchi, 1996). The expected posterior prior scheme can be applied to a wide variety of statistical problems. Applications to the selection of linear models and to the default analysis of mixture models are shown.
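
For reference, the construction described in the abstract can be written out as follows. This is a minimal sketch assuming the standard formulation; fi denotes the sampling density under Mi, and mNi the marginal of the training sample y* under the initial prior πNi, neither of which is named explicitly in the abstract.

% Expected posterior prior for model M_i under the predictive measure m^*:
% the initial (possibly improper) prior pi_i^N is updated with a minimal
% training sample y^* and averaged over m^*(y^*).
\[
  \pi_i^*(\theta_i) \;=\; \int \pi_i^N(\theta_i \mid y^*)\, m^*(y^*)\, dy^*,
  \qquad
  \pi_i^N(\theta_i \mid y^*) \;=\;
    \frac{f_i(y^* \mid \theta_i)\, \pi_i^N(\theta_i)}{m_i^N(y^*)}.
\]
% The Bayes factor for comparing M_i with M_j is then computed from the
% marginal likelihoods taken under the expected posterior priors:
\[
  B_{ij}(y) \;=\;
    \frac{\int f_i(y \mid \theta_i)\, \pi_i^*(\theta_i)\, d\theta_i}
         {\int f_j(y \mid \theta_j)\, \pi_j^*(\theta_j)\, d\theta_j}.
\]
% Any arbitrary multiplicative constant in pi_i^N cancels inside
% pi_i^N(theta_i | y^*), so pi_i^* and hence B_ij are well defined.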

Degree

Ph.D.

Advisors

Berger, Purdue University.

Subject Area

Statistics
