In a typical supervised classification procedure, the availability of training samples has a fundamental effect on classifier performance. For a fixed number of training samples, classifier performance degrades as the number of dimensions (features) increases. This phenomenon strongly influences the analysis of hyperspectral data sets, where the ratio of training samples to dimensionality is small. The objectives of this research are to develop novel methods for mitigating the detrimental effects of this small ratio and to reduce the effort an analyst must spend selecting training samples. An iterative method is developed in which semi-labeled samples (classification outputs) are used together with the original training samples to estimate parameters, establishing a positive feedback procedure in which parameter estimation and classification enhance each other iteratively. This work comprises four phases. First, the effect of semi-labeled samples on parameter estimates is investigated; this phase demonstrates that an iterative procedure based on positive feedback is achievable. Second, a maximum likelihood pixel-wise adaptive classifier is designed. Third, a family of adaptive covariance estimators is developed that combines the adaptive classifier with covariance estimation to handle cases where the training sample set is extremely small. Finally, to fully exploit the rich spectral and spatial information in hyperspectral data and to enhance the performance and robustness of the proposed adaptive classifier, an adaptive Bayesian contextual classifier based on a Markov random field model is developed.
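The iterative positive-feedback idea described above can be illustrated with a minimal sketch. This is not the dissertation's actual estimator; it assumes Gaussian class-conditional densities, a simple maximum likelihood decision rule, and hard (unweighted) semi-labels, with a small diagonal regularizer added to keep covariances invertible when sample counts are tiny. All function names (`ml_classify`, `iterative_semilabeled_fit`) are illustrative.

```python
import numpy as np

def ml_classify(X, means, covs):
    """Gaussian maximum likelihood rule: assign each row of X to the
    class with the largest log density (equal priors assumed)."""
    scores = []
    for m, S in zip(means, covs):
        d = X - m
        inv = np.linalg.inv(S)
        _, logdet = np.linalg.slogdet(S)
        scores.append(-0.5 * (np.sum(d @ inv * d, axis=1) + logdet))
    return np.argmax(np.stack(scores, axis=1), axis=1)

def iterative_semilabeled_fit(X_train, y_train, X_unlab, n_iter=5, reg=1e-6):
    """Sketch of the positive-feedback loop: classify unlabeled pixels,
    then re-estimate means/covariances from training + semi-labeled
    samples, and repeat."""
    classes = np.unique(y_train)
    dim = X_train.shape[1]
    # Initial estimates from the (possibly very small) training set.
    means = [X_train[y_train == c].mean(axis=0) for c in classes]
    covs = [np.cov(X_train[y_train == c].T) + reg * np.eye(dim)
            for c in classes]
    for _ in range(n_iter):
        # Classification outputs become semi-labeled samples.
        y_semi = ml_classify(X_unlab, means, covs)
        for i, c in enumerate(classes):
            Xc = np.vstack([X_train[y_train == c], X_unlab[y_semi == i]])
            # Enlarged sample set improves the parameter estimates,
            # which in turn improves the next classification pass.
            means[i] = Xc.mean(axis=0)
            covs[i] = np.cov(Xc.T) + reg * np.eye(dim)
    return means, covs
```

In this sketch the semi-labels are treated the same as true labels; the dissertation's phases instead weight semi-labeled samples and add adaptive covariance estimation and spatial (MRF) context on top of this basic loop.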
